EPF LOCAL SITE (How to use this page)
ldd filename
To find out which libraries an executable file needs. (Børge)
xterm -T MyTitle -e slogin susy &
Opens a window on susy with the title MyTitle (you must type your password). (Børge)
To run PAW in the background without the graphical HIGZ window:
cp MyMacro.kumac ~/.pawlogon.kumac; echo 0 | paw > MyLog &
(Børge)
To print 2, 4 or more pages per sheet, use psnup (works for ps files or text files):
psnup -2 -d -m0.5cm -pa4 1.ps > 2.ps
Or directly to a printer:
psnup -2 -d -m0.5cm -pa4 1.ps | ppr -Pfys3
-pa4 specifies A4 paper format
-d draws a frame around each page
-m sets the margin
Man page
(Børge)
To merge several (text/ps) files into one common ps file:
mpage -1 file1 file2 file3 > files.ps
(Børge)
Another tool for manipulating ps-files: psshuffl (Børge)
The priority of your job (w.r.t. CPU consumption) is governed by the nice value it is given. Highest priority is 0 (the default), lowest is 19. Inside top a process can be 'reniced' by typing r, then giving the process ID and the new nice value (an ordinary user can only lower the priority), or the nice value can be set when the job is started:
nice -10 FindHiggs.exe
Suggestion for a standard: use nice=15 for a standard background process (e.g. when we generate Monte Carlo on susy). This allows us to put lower priority (nice=19) on jobs which are not so urgent, or higher (nice=10, or even nice=5) if a job is urgent and has to compete with other jobs. (Børge)
To stop a process for a while, and start it again:
kill -STOP process_id
kill -CONT process_id
(Børge)
A macro for drawing confidence levels etc in ROOT can be found here. To see a similar result click this. (Sigve)
Some code to normalize, add and draw histograms created from hbook ntuples can be found here. To see a similar result click this. (Sigve)
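As a rough sketch of what such a macro does (the file name hists.root and the histogram names h10 and h20 are only placeholders, not the ones used in the actual code):
// norm_add.C -- run with: root -l norm_add.C
{
  TFile f("hists.root");                      // file holding the histograms (e.g. made with h2root)
  TH1 *h1 = (TH1*)f.Get("h10");
  TH1 *h2 = (TH1*)f.Get("h20");
  if (h1->Integral() > 0) h1->Scale(1.0/h1->Integral());   // normalize to unit area
  if (h2->Integral() > 0) h2->Scale(1.0/h2->Integral());
  TH1 *hsum = (TH1*)h1->Clone("hsum");        // copy h1, then add h2
  hsum->Add(h2);
  hsum->SetLineColor(kRed);
  hsum->Draw();
  h1->Draw("same");
  h2->Draw("same");
}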
This page gives the "translation" of some commonly used PAW commands into ROOT's interactive interpreter CINT: PAW to CINT/ROOT. (Yuriy)
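A few typical correspondences, just to give the flavour (file and histogram names are examples; the linked page is the reference):
// typed at the ROOT prompt; the hbook file is first converted with: h2root file.hbook file.root
TFile *f = new TFile("file.root");   // PAW:  h/file 1 file.hbook
f->ls();                             // PAW:  h/list   (check the names of the converted histograms)
TH1 *h = (TH1*)f->Get("h10");        // pick up the histogram with HBOOK ID 10 (name as shown by ls)
h->Draw();                           // PAW:  h/pl 10
gPad->SetLogy(1);                    // PAW:  opt logy
h->Fit("gaus");                      // PAW:  h/fit 10 g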
fy104 exercise: This is a ROOT-based analysis of old Bhabha scattering data from DELPHI's Small Angle Tagger.
Documentation can be found in the local directory /mn/susy/particle/epf/fy104_bboev/doc/
The root files used in the exercise are stored in /mn/susy/particle/epf/fy104_bboev/data/
The user files (which should be copied into the user's working directory before starting root) are stored in /mn/susy/particle/epf/fy104_bboev/bruker/
An example script for drawing Feynman diagrams in LaTeX can be found here. To process such a script do (you may have to do rm sigpics* first, and you will need the files feynmf.mf and feynmf.sty found at the same location):
latex signal
mf '\mode=localfont;' input sigpics.mf
latex signal
dvips -o signal.ps signal
gv signal.ps
Fancier diagrams can be drawn with ROOT. If you are interested just see the ROOT hints. My personal opinion is that ROOT is nice for slides and posters, but LaTeX for articles etc. (Sigve)
Alex has started a collection of Feynman diagrams for ROOT.
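To give the basic idea (this is just an illustration, not one of the diagrams in the collection), a simple s-channel diagram needs only TLine, TCurlyLine and TLatex:
// feyn.C -- minimal sketch of e+e- -> gamma* -> f fbar, run with: root -l feyn.C
{
  TCanvas *c = new TCanvas("c", "e+e- -> f fbar", 600, 300);
  c->Range(0, 0, 100, 50);
  TLine *e1 = new TLine(10, 45, 30, 25);  e1->Draw();   // incoming e-
  TLine *e2 = new TLine(10,  5, 30, 25);  e2->Draw();   // incoming e+
  TCurlyLine *gam = new TCurlyLine(30, 25, 70, 25);     // photon propagator
  gam->SetWavy();                                       // wavy line (curly would be a gluon)
  gam->Draw();
  TLine *f1 = new TLine(70, 25, 90, 45);  f1->Draw();   // outgoing fermion
  TLine *f2 = new TLine(70, 25, 90,  5);  f2->Draw();   // outgoing antifermion
  TLatex t;  t.SetTextSize(0.08);
  t.DrawLatex(12, 46, "e^{-}");   t.DrawLatex(12, 1, "e^{+}");
  t.DrawLatex(48, 28, "#gamma");
  t.DrawLatex(91, 44, "f");       t.DrawLatex(91, 3, "#bar{f}");
}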
Generated process, ISUB, on the atlfast ntuple.
Due to a bug in atlfast, the generated subprocess of a given event is not defined. As this information is sometimes very useful, I include below corrections to atlfastntup.F to get the correct ISUB into the ntuple. The corrections are placed between the *LaB comment lines.
#include <atlfast/ptrigger.inc>
*LaB
*LaB Add the common block with the process info from pythia
*LaB (the ISUB/IPROCESS scheme used here is wrong)
      integer MINT
      double precision VINT
      COMMON/PYINT1/MINT(400),VINT(400)
*LaB
*LaB
      . . .
*analysis and ntuple filling
      ELSEIF(MODE.EQ.0) THEN
*
*process ID inls isub PYTHIA
*LaB
*LaB No_IPROCESS_defined...   ISUB=IPROCESS
      ISUB = MINT(1)
*LaB
*LaB
How to get .root files as output instead of .hbook and .ntup
The default output files as described in the Atlfast jobOptions file are .ntup and .hbook. One has to convert these files to .root files in order to look at them in ROOT. By adding a few lines in the jobOptions file it is possible to make Atlfast create .root files directly. The necessary modifications of the jobOptions file are described below; I have kept the default code as comments for comparison.
ApplicationMgr.DLLs += { "RootHistCnv" };
//ApplicationMgr.DLLs += { "HbookCnv" };
. . .
// HBOOK OUTPUT:
// This is the name of the file where your histograms will be created.
//
ApplicationMgr.HistogramPersistency = "ROOT";
//ApplicationMgr.HistogramPersistency = "HBOOK";
HistogramPersistencySvc.OutputFile = "hbook.root";
//HistogramPersistencySvc.OutputFile = "atlfast.hbook";
//NtupleSvc.Output = {"FILE1#ntuple.hbook" };
//NtupleSvc.Type = 6;
NtupleSvc.Output = {"FILE1 DATAFILE='ntup.root' TYP='ROOT' OPT='NEW'"};
//NtupleSvc.Output = {"FILE1 DATAFILE='atlfast.ntup' TYP='HBOOK' OPT='NEW'"};
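The ntup.root file can then be opened straight away in ROOT. A minimal sketch of a first look (the directory/ntuple name inside the file depends on the ntuple ID booked in your jobOptions, so check with ls() rather than trusting the placeholder used here):
// at the ROOT prompt, started with: root -l
TFile *f = new TFile("ntup.root");        // file written by NtupleSvc
f->ls();                                  // list the directories and ntuples actually present
TTree *t = (TTree*)f->Get("FILE1/100");   // placeholder: directory FILE1, ntuple ID 100
if (t) t->Print();                        // show the branches (ntuple columns)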
As of 14 December 2001 the university is running an old version of Condor (6.1), while we are running a modern version of RedHat, so our GNU C libraries are not quite compatible with the condor_compile command. If you append `condor_glibc` at the end of the condor_compile command you should avoid the fatal compilation error concerning uname. The condor_glibc command will be updated along with Condor, so your compilation scripts and Makefiles should keep working. The man pages for the Condor commands (version 6.3, so a bit more advanced than at UiO) are here; condor_glibc is not covered since it is entirely local and hopefully temporary.
Example: condor_compile g77 -o mytest mytest.f `condor_glibc`
condor_q - checks the status of your jobs
condor_status - displays the status of Condor at UiO
condor_submit <filename> - submits the Condor job in <filename>
NB! If the condor commands don't work try: source /etc/profile
Example of a Condor job:
####################
##
## Test Condor command file for BABAMC benchmark on some local EPF machines
## 4 identical jobs are submitted ("queue 4")
####################
universe     = standard
executable   = bench_condor.remote
output       = bench.out.$(process)
error        = bench.err
log          = bench.log
Maskiner     = (Machine == "tagger.uio.no" \
             || Machine == "vertex.uio.no" \
             || Machine == "tracker.uio.no" \
             || Machine == "helicity.uio.no" \
             || Machine == "fyspc-epf01.uio.no" \
             || Machine == "fyspc-epf09.uio.no" \
             || Machine == "fyspc-epf18.uio.no")
requirements = arch == "INTEL" && opsys == "LINUX" && $(Maskiner)
queue 4
Very attractive Python resources are collected on this page. (Sigve)
A small script for killing all your jobs on a machine... here. (Sigve)
A script for searching for an expression in files and extracting a number ... here. (Sigve)
Suggestions concerning considerable structural changes should be directed to Børge.
Last modified: Tue Sep 17 12:35:08 CEST 2002