
Analysis in SPRACE 2012

Available Datasets:

All datasets available at SPRACE (all of which are AOD or AODSIM) can be found in this link. For 2012, the following real data datasets are available:

  • Single Muon
  • Double Electron
  • MET

Run2012 datasets can be found in this link

Summer12 MC datasets can be found in this link
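To check which blocks of a given dataset exist, you can also query DAS from the command line. A sketch, assuming you have the das_client.py tool set up (the dataset pattern below is just an example):

    das_client.py --query="dataset=/MET/Run2012A*/AOD" --limit=0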

JSON files

JSON files for 2012 Runs at 8 TeV can be found in this link.
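For reference, these JSON files are simply dictionaries mapping each run number to a list of good luminosity-section ranges. A minimal example of the format (the run and lumi-section numbers below are made up):

    {"190645": [[10, 110], [115, 150]], "190704": [[1, 45]]}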

Analysis steps

These links should be useful.

Online list of runs, triggers, etc.

General Analysis Strategy

In general, we advocate the following strategy:

  1. Download datasets to SPRACE (optional)
  2. Skim on basic reconstructed quantities and/or trigger bits (see the configuration sketch after this list). Run on the GRID with CRAB. Save the output at SPRACE.
  3. Make PATtuples containing everything you need for your analysis. Run on the skims using Condor. Save the output at SPRACE.
  4. Make basic ROOT ntuples containing only the basic information needed for optimization and plots. Run on these at the interactive access server and/or your laptop.
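As an illustration of step 2, a skimming configuration that keeps only events firing a given HLT path could look like the minimal sketch below. The input file and the HLT_MET120_v* path are placeholders; adapt them to your dataset and trigger menu.

    import FWCore.ParameterSet.Config as cms

    process = cms.Process("SKIM")
    process.load("FWCore.MessageService.MessageLogger_cfi")

    process.maxEvents = cms.untracked.PSet(input = cms.untracked.int32(-1))
    process.source = cms.Source("PoolSource",
        fileNames = cms.untracked.vstring("file:input_AOD.root")  # placeholder input file
    )

    # Keep only events that fired the chosen HLT path(s)
    import HLTrigger.HLTfilters.hltHighLevel_cfi
    process.triggerSelection = HLTrigger.HLTfilters.hltHighLevel_cfi.hltHighLevel.clone(
        TriggerResultsTag = cms.InputTag("TriggerResults", "", "HLT"),
        HLTPaths = cms.vstring("HLT_MET120_v*"),  # placeholder trigger path
        throw = False  # tolerate runs where the path is absent
    )
    process.skimPath = cms.Path(process.triggerSelection)

    # Write out only the selected events
    process.out = cms.OutputModule("PoolOutputModule",
        fileName = cms.untracked.string("skim_AOD.root"),
        SelectEvents = cms.untracked.PSet(SelectEvents = cms.vstring("skimPath"))
    )
    process.outpath = cms.EndPath(process.out)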

Strategy for Real Data - skimming

  1. Get the most recent JSON file from the link above
  2. If you have already run on some data, take the difference between the new JSON file and the data you have already run over with:
       compareJSON.py --sub <mostRecent.json> <dataAlreadyUsed.json> <fileForNewDataOnly.json>
       
  3. Set up a CRAB job with the file for the new data only:
    [CRAB]
    jobtype = cmssw
    scheduler = glite
    use_server = 0
    
    [CMSSW]
    datasetpath=/MET/Run2012A-PromptReco-v1/AOD
    pset=rsanalyzer_JetMET_skimming_Run2012A_cfg.py
    total_number_of_lumis=-1
    number_of_jobs = 75
    lumi_mask=fileForNewDataOnly.json
    get_edm_output = 1
    
    [USER]
    copy_data = 1
    return_data = 0
    storage_element = T2_BR_SPRACE
    user_remote_dir = /MET_Run2012A-PromptReco_v1_2012May10
    ui_working_dir = myWorkingDirName
    
    [GRID]
    ce_white_list = T2_BR_SPRACE
       
In this example, we're running on the /MET/Run2012A-PromptReco-v1/AOD dataset with the rsanalyzer_JetMET_skimming_Run2012A_cfg.py configuration file. We're setting up a task with around 75 jobs, and we will copy the output to the remote directory /MET_Run2012A-PromptReco_v1_2012May10, which lives in srm://osg-se.sprace.org.br:8443/srm/managerv2?SFN=/pnfs/sprace.org.br/data/cms/store/user/yourUserName/MET_Run2012A-PromptReco_v1_2012May10. Naturally, you have to adjust these values to your own needs.
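To create and submit the task, the usual CRAB commands apply (assuming the configuration above is saved as crab.cfg):

     crab -create -cfg crab.cfg
     crab -submit -c myWorkingDirName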
  4. Do the usual CRAB workflow to retrieve the output, but also ask for the final report:
     crab -status -c myWorkingDirName
     crab -getoutput -c myWorkingDirName
     crab -report -c myWorkingDirName

The report produces a JSON file which resides in myWorkingDirName/res/lumiSummary.json. This file represents exactly the data you ran over, taking into account failed jobs, blocks of data which were not yet available, etc. This is the "dataAlreadyUsed.json" that you should use next time! To get the amount of luminosity that you ran over, use the lumiCalc2.py script:

     lumiCalc2.py -b stable -i lumiSummary.json overview
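Finally, to keep the bookkeeping up to date for the next iteration, the fresh lumiSummary.json can be merged into the accumulated "already used" file with the --or option of compareJSON.py (a sketch using the file names from above; the output name is arbitrary):

     compareJSON.py --or lumiSummary.json dataAlreadyUsed.json updatedDataAlreadyUsed.json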

-- ThiagoTomei - 09 May 2012
