Magnetic Resonance Center and Department of Chemistry, University of Florence

A competence center to serve translational research from molecule to brain


Presentation Transcript

Slide1

Magnetic Resonance Center and Department of Chemistry
University of Florence
rosato@cerm.unifi.it

A competence center to serve translational research from molecule to brain

Antonio Rosato

Slide2

mobrain.egi.eu

Brings the micro (WeNMR) and macro (N4U) worlds together into one competence center under EGI-Engage, with activities toward:

Integrating the communities
Making best use of cloud resources
Bringing data to the cloud (cryo-EM)
Exploiting GPGPU resources
While maintaining the quality of our current services!

Slide3

Main activities (effort in person-months, PM)

Task 1: User support and training - 9 PM
Task 2: Cryo-EM in the cloud: bringing clouds to the data - 23.4 PM
Task 3: Increasing the throughput efficiency of WeNMR portals via DIRAC4EGI - 0 PM
Task 4: Cloud VMs for structural biology - 0 PM
Task 5: GPU portals for biomolecular simulations - 14 PM
Task 6: Integrating the micro (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) VRCs - 11 PM

TOTAL funded effort: 57.4 PM

Slide4

WeNMR VRC (December 2015)

One of the largest (#users) VOs in life sciences:

> 720 VO registered users (36% outside EU)

> 2250 VRC members (>60% outside EU)

~ 41 sites with > 142,000 CPU cores via the EGI infrastructure

User-friendly access to the Grid via web portals (NMR, SAXS)

www.wenmr.eu

A worldwide e-Infrastructure for NMR and structural biology

Slide5

The Competence Center (CC) under EGI-Engage

WeNMR has evolved into West-Life while remaining closely connected to EGI

Slide6

Sustained growth of the WeNMR VRC

Slide7

Sustained # of jobs (chart annotation: end of WeNMR EU funding)

Slide8

Mainly grid, but also FedCloud (see the cryo-EM activities later)

CVMFS for software deployment

Currently both gLite and DIRAC4EGI submission mechanisms

West-Life (and related projects) rely on EGI resources

(Chart: HADDOCK portal jobs)
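As a rough illustration of the two mechanisms above, the sketch below shows how a portal job can find its software via CVMFS and be submitted through DIRAC4EGI. The repository path, group name, and JDL contents are assumptions for illustration, not the actual WeNMR configuration.

    # Software published once in a CVMFS repository appears read-only on every site
    # (repository name below is an assumed example, not the real WeNMR repo).
    ls /cvmfs/wenmr.egi.eu/

    # Minimal DIRAC job description (haddock.jdl); file names are illustrative:
    #   JobName       = "haddock_run";
    #   Executable    = "run_haddock.sh";
    #   StdOutput     = "std.out";
    #   StdError      = "std.err";
    #   InputSandbox  = {"run_haddock.sh"};
    #   OutputSandbox = {"std.out", "std.err"};

    dirac-proxy-init -g enmr_user        # authenticate against the VO (group name assumed)
    dirac-wms-job-submit haddock.jdl     # prints the assigned job ID
    dirac-wms-job-status 1234            # poll the job by its ID
    dirac-wms-job-get-output 1234        # retrieve the output sandbox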

Slide9

Task 1: User Support and Training

Presentations advertising EGI / WeNMR / NeuGrid resources at various national and international conferences

Publication of the HADDOCK2.2 web server in J. Mol. Biol. (EGI-Engage acknowledged): http://dx.doi.org/doi:10.1016/j.jmb.2015.09.014

Publication of a novel structural comparison approach in Scientific Reports (EGI-Engage acknowledged): http://www.nature.com/articles/srep09486

Other publications submitted

A training workshop sponsored by INSTRUCT involving all MoBrain components (GPUs, clouds) will be held next week at Utrecht University: https://www.structuralbiology.eu/update/events/instruct-practical-course-advanced-methods-for-the-integration-of-diverse-structural-data-with-n-367/

Slide10

Support for MoBrain/West-Life activities

75M CPU hours, 50 TB storage from 7 sites:

INFN-PADOVA (Italy)
RAL-LCG2 (UK)
TW-NCHC (Taiwan)
SURFsara (The Netherlands)
NCG-INGRID-PT (Portugal)
NIKHEF (The Netherlands)
CESNET-MetaCloud (Czech Republic)

New SLA agreement

Slide11

Task 2: Cryo-EM in the cloud

Main goal: bring computational tools to the large cryo-EM datasets, enabling data processing and analysis

Largely based on the SCIPION framework. Scipion is an image processing framework to obtain 3D models of macromolecular complexes using Electron Microscopy (3DEM). It integrates several software packages with a unified interface. Scipion workflows transparently combine different software tools and track all steps (so they can be reproduced later).

http://scipion.cnb.csic.es/

Slide12

T2.1: Scipion deployment at the Federated Cloud

Deployment at the CESNET MetaCloud site (VO enmr.eu): a 4-node cluster with 8 CPUs/node and 32 GB RAM/node, plus 2 TB of block storage.

Real cryo-EM testing at our laboratory.

Working on a Scipion image to be published in the EGI AppDB marketplace.
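For readers unfamiliar with the EGI Federated Cloud workflow, below is a hedged sketch of how such a VM could be instantiated with the rOCCI client of that era. The endpoint, template names, and VM title are placeholders, not the actual CESNET deployment commands.

    # Authenticate with a VOMS proxy for the enmr.eu VO, then create a compute
    # resource from a published image (os_tpl) and flavor (resource_tpl).
    occi --endpoint https://example-cloud-site:11443/ \
         --auth x509 --user-cred $X509_USER_PROXY --voms \
         --action create --resource compute \
         --mixin os_tpl#scipion-image --mixin resource_tpl#large \
         --attribute occi.core.title="scipion-node-1"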

Slide13

T2.2: Scipion deployment at other NGI sites

Deployment at the SURFsara site (VO enmr.eu): a single fat node with 63 CPUs and 247 GB RAM, plus 1.5 TB of Ceph storage.

Real cryo-EM testing at our laboratory.

First version of the image produced.

Second deployment for the INSTRUCT course in Utrecht: 12 virtual machines with a Scipion installation for the hands-on Scipion tutorial.

Slide14

Task 2: Cryo-EM in the cloud: bringing clouds to the data

T2.3: Scipion deployment at INSTRUCT EM sites

Deployment by SURFsara to support the NeCEN cryo-EM center

Performed a live demo for NeCEN scientists to show EM processing in the SURFsara cloud

Planned visit to NeCEN to help with the Scipion installation, both locally and in the SURFsara cloud

Slide15

Task 5: GPU portals for biomolecular simulations

Main goal: deploy the AMBER and/or GROMACS packages on GPGPU test beds, develop standardized protocols optimized for GPGPUs, and build web portals for their use.

For this, we recently concluded an extensive set of benchmarks, described in DL 6.7 (see the MoBrain wiki). Some are discussed in the next slides; more in the accelerated computing session on Friday.

Additionally: develop GPGPU-enabled web portals for exhaustive search in cryo-EM density. This links to the cryo-EM Task 2 of MoBrain.

(Figure: MD refinement script)

Slide16

AMBER with GPUs and a ferritin MD calculation

The M homopolymer: M ferritin from bullfrog

Total MW 480 kDa; 24 subunits of 175 aa each

Octahedral (432) symmetry, with C2, C3, and C4 axes

(Figure labels: 12 nm outer diameter, 8 nm inner cavity)

Ferritin with solvent: 178,910 atoms
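As a quick sanity check on these numbers: 480 kDa / 24 subunits = 20 kDa per subunit, and 175 residues x ~110 Da per residue ≈ 19 kDa, so the stated subunit count, chain length, and total mass are mutually consistent.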

Slide17

GPU Florence testbed system

Cluster based on 3 worker nodes, each with: 2 x XEON E5-2620 v2, 64 GB RAM, 2 x K20m
Total: 36 CPU cores, 6 GPUs, 192 GB RAM

Queue manager: PBS Torque 4.2.10, compiled with NVML library support
Scheduler: Torque scheduler, Maui 3.3.1 (with GPU patch)
Libraries: CUDA version 5.5/6.5, OpenMPI

GPU status with pbsnodes -a:

    .......
    gpus = 2
    gpu_status =
      gpu[1]=gpu_id=0000:42:00.0; gpu_pci_device_id=271061214; gpu_pci_location_id=0000:42:00.0;
        gpu_product_name=Tesla K20m; gpu_display=Enabled; gpu_memory_total=4799 MB; gpu_memory_used=13 MB;
        gpu_mode=Exclusive_Thread; gpu_state=Unallocated; gpu_utilization=0%; gpu_memory_utilization=0%;
        gpu_ecc_mode=Enabled; gpu_single_bit_ecc_errors=0; gpu_double_bit_ecc_errors=0; gpu_temperature=21 C,
      gpu[0]=gpu_id=0000:04:00.0; gpu_pci_device_id=271061214; gpu_pci_location_id=0000:04:00.0;
        gpu_product_name=Tesla K20m; gpu_display=Enabled; gpu_memory_total=4799 MB; gpu_memory_used=13 MB;
        gpu_mode=Exclusive_Thread; gpu_state=Unallocated; gpu_utilization=0%; gpu_memory_utilization=0%;
        gpu_ecc_mode=Enabled; gpu_single_bit_ecc_errors=0; gpu_double_bit_ecc_errors=0; gpu_temperature=20 C,
      driver_ver=319.82, timestamp=Wed May 13 12:01:48 2015

Software: AMBER 14 suite compiled with GPU and MPI support

Example of a PBS submission file:

    #PBS -l nodes=1:gpus=2,walltime=20:00:00
    hostname -a                                  # log which worker node the job landed on
    source ${PBS_O_WORKDIR}/set_cuda.sh          # set the CUDA environment for this node
    cd ${PBS_O_WORKDIR}
    export AMBERHOME=/nfs_export/gpucluster/bin/amber14/
    # run the GPU-enabled pmemd with 2 MPI ranks, one per requested GPU
    /usr/lib64/openmpi/bin/mpirun -np 2 $AMBERHOME/bin/pmemd.cuda.MPI
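Submitting and monitoring such a job uses the standard Torque commands; the script name below is illustrative.

    qsub amber_gpu.pbs      # queue the job; prints a job ID
    qstat -n                # check queue status and node assignment
    pbsnodes -a             # inspect per-node GPU status, as shown above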

Slide18

Benchmarks for AMBER

Comparison of the performance achieved using single/multi-core CPUs vs. one GPU card. Our unit is how much simulation time you can compute in one wall-clock day (ns/day).
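A hedged sketch of how such a comparison can be run with AMBER 14's pmemd engines (input and topology file names are illustrative, not the actual benchmark inputs):

    # single CPU core
    $AMBERHOME/bin/pmemd -O -i md.in -p ferritin.prmtop -c ferritin.inpcrd -o cpu01.out
    # 12 CPU cores via MPI
    mpirun -np 12 $AMBERHOME/bin/pmemd.MPI -O -i md.in -p ferritin.prmtop -c ferritin.inpcrd -o cpu12.out
    # one GPU card
    $AMBERHOME/bin/pmemd.cuda -O -i md.in -p ferritin.prmtop -c ferritin.inpcrd -o gpu1.out
    # each mdout file reports the achieved throughput in its final timing section
    grep "ns/day" *.out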

Slide19

Benchmarks for GROMACS

The MD software GROMACS was benchmarked on a GPU cloud at the IISAS-GPUCloud (NGI_SK) EGI Federated Cloud site.

The VM had an Intel E5-2650v2 @ 2.6 GHz CPU, 8 GB RAM, and an NVIDIA Tesla K20m GPGPU.

At variance with AMBER, the performance of GROMACS also depends on the available CPU.

By comparing the ns/day achieved using 8 cores with and without the GPU, we measured an acceleration of 2.8x.

This acceleration is in line with other published benchmarks, suggesting that there is no significant overhead due to the virtualization of GPUs.
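A minimal sketch of that with/without-GPU comparison using the gmx command-line interface (run names and the .tpr input are illustrative):

    gmx mdrun -s bench.tpr -deffnm cpu_run -ntomp 8 -nb cpu   # CPU-only baseline on 8 cores
    gmx mdrun -s bench.tpr -deffnm gpu_run -ntomp 8 -nb gpu   # same 8 cores, nonbonded work offloaded to the K20m
    # the "Performance" line at the end of each log reports ns/day;
    # the ratio gpu_run/cpu_run gives the ~2.8x acceleration quoted above
    grep Performance cpu_run.log gpu_run.log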

Slide20

AMPS-NMR GPU Portal

Slide21

Task 5: to do

Optimize the protocol for MD-based refinement of protein structures developed by the CIRMMP partner for GPGPUs

Provide additional portals (possibly; talk on Friday @ the Accelerated Computing session)

Slide22

Task 6: Integrating the micro (WeNMR/INSTRUCT) and macroscopic (NeuGRID4you) VRCs

The MoBrain Portal

Slide23

Slide24

West-Life (and related projects) rely on EGI resources

World-wide: ~144,000 CPU cores from 41 sites (EGI & OSG) (stats Jan. 2016)

(Chart: WeNMR data from 2014-2015)