NuPPN

General Overview
Main codes: PPN (single-zone) and MPPNP (multi-zone), both in the NuPPN/frames directory.

Installation
Basically, follow the README files in the directories; see also the PPN GitHub wiki page and the MPPNP GitHub wiki page.
 * I did this installation with Marco via Google Hangout on 7/2/2019. It differs slightly from what I originally wrote here.


 * 1) git clone https://github.com/NuGrid/NuPPN.git --branch modular2

The NuPPN/frames/ppn/source directory contains all the Fortran source files; the README file there helps a lot. (In the old version this was the NuPPN/frames/ppn/CODE directory.)

Installation procedure: in Make.local, the physics and solver packages are pointed to with PHYSICS = $(PPN)/physics and SOLVER = $(PPN)/solver. (Nowadays ACML 4.4.0 is no longer available as open source.)
 * 1) cp MAKE_LOCAL/Make.local.*** Make.local (I used Make.local.soul-gfort-superLU)
 * 2) Open the Make.local file and set the path (PPN) to the NuPPN directory (e.g., /home/user_name/NuPPN)
 * 3) Set the libraries as follows:
 * 3a) If necessary, comment out LD_PATH_LAPACK and LAPACK_LIBS (with #). These are only for solver option 3 or 4.
 * 3b) Change #BLASLIB = -L/usr/lib/libopenblas.a -lopenblas to BLASLIB = -lopenblas (I did this only when installing with Marco).
 * 3c) Note I installed openblas and whatever other libraries were necessary with apt-get (sudo apt-get install libopenblas-dev; 2019/9/18 for the GCA server).
 * 3d) (On a Mac, check brew info openblas and include the path shown in its output. For installation, brew install homebrew/science/openblas (or simply brew install openblas) may work, but I already had it, so I didn't install it this way.)

Now copy the Run_Template directory.
 * 1) cp -r run_template run_test, in my case.
 * 2) In the Makefile of that directory, set the PCD path to the "source" directory, e.g., PCD=../source
 * 3) Follow the ReadMe file, i.e., ("make distclean" if you wish, and then) "make". This will create ppn.exe.
 * 4) Run with "./ppn.exe". You will see output files.

Check the data / Visualization / Plotting / Making a figure
"iso_massf*****.DAT" files are snapshots at each step (time) of the stellar evolution. "x-time.data" collects data from all of these steps (i.e., the full run). Use "less -S x-time.data" for better viewing. In the columns, numbers < 1e-15 are all noise (i.e., junk and untrustworthy), except for the light particles (p, n, he), which are used in many reaction calculations, so their thresholds are set differently.
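As a concrete illustration of the noise floor, here is a minimal Python sketch that reads an x-time.data-style table and zeroes out sub-1e-15 values for everything except the light particles. The column names and file layout below are invented for illustration; adapt them to your actual output.

```python
import io

# Invented x-time.data-style sample; the real file has many more columns.
sample = """\
t_y p he4 n14 zr95
1.0e2 7.0e-1 2.8e-1 1.0e-3 3.0e-18
1.0e3 6.9e-1 2.9e-1 1.1e-3 3.0e-14
"""

LIGHT = {"p", "n", "he3", "he4"}   # light particles keep their tiny values
NOISE_FLOOR = 1e-15               # below this, heavy-species values are junk

def read_xtime(fh):
    header = fh.readline().split()
    rows = []
    for line in fh:
        row = dict(zip(header, (float(x) for x in line.split())))
        for name in header[1:]:
            # zero out sub-threshold abundances: they are numerical noise
            if name not in LIGHT and row[name] < NOISE_FLOOR:
                row[name] = 0.0
        rows.append(row)
    return rows

rows = read_xtime(io.StringIO(sample))
print(rows[0]["zr95"])   # 0.0 (3.0e-18 is below the noise floor)
print(rows[1]["zr95"])   # 3e-14 (kept)
```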

In NuGrid, plots can be made with the NuGridPy package. Download and install it; see the NuGridPy wiki page.

You can install it with, e.g., "pip install nugridpy". Otherwise, simply git clone as described on the web page above. If you chose the "git clone" method, then
 * 1) vi ~/.bashrc and set the $PYTHONPATH
 * 2) cd NuGridPy
 * 3) pip install . (to be honest, I'm not sure this is really needed, because the python scripts seem to work fine without it (2019/9/18)). (Also note this sometimes gives an error message; in that case you can use "python setup.py install".)

(The below is correct, but it is better to use a Jupyter notebook as described in the next section.) In the directory with PPN output files, such as "run_test":
 * 1) python abu_chart.py (or whatever README file says; sometimes just run shell script such as ./plot.sh)
 * 2) If you see an error in python, fix it. Typical problems are relative imports: e.g., "from .data_plot import *" needs to be changed to "from data_plot import *", and "from . import utils" to "import utils".

ALSO NOTE! On the Sony Linux machine, I installed NuGridPy both ways: from GitHub (/home/shuyaota/) and via pip install nugridpy (/home/shuyaota/anaconda3/lib/python3.7/site-packages/nugridpy/ascii_table.py). But I am using the GitHub one (my PYTHONPATH is set to it). I think both are equivalent.

'''NOTE!!! As of 2019/10/25, gc-master does NOT need the modification of the python codes described above (don't change to "from data_plot import *" etc.; keep the original version, "from .data_plot import *").''' I don't know why; I set PYTHONPATH correctly. Perhaps I just copied NuGridPy from gca? (Should I have copied from the original GitHub?) This means you can't use the same python analysis scripts (that I made) on both machines; you need slight modifications (see the differences between the scripts).

'''NOTE!!! As of 2019/10/25, WENDI also does NOT need the modification of the python codes described above (keep the original "from .data_plot import *" etc.). In WENDI's case, I installed NuGridPy from scratch: git clone NuGridpy.git and then python setup.py install in that directory. I installed Anaconda3 before running "python setup.py install".''' This means WENDI and gc-master currently have the same python setting, while GCA (and the Sony Linux machine) are different. I have a feeling installing Anaconda3 automatically sets PYTHONPATH?? (At least there was no need to set PYTHONPATH to the NuGridPy directory.) Well... it actually worked at first, but soon Python3 from anaconda3 was no longer recognized in bash, so I aliased python to /usr/bin/python3.6 in .bashrc. This is working now, but note the python version is 3.6, not 3.7.

iPython, Jupyter Notebook
Marco suggested using "jupyter notebook". To use this interactive python (IPython), install ipython and jupyter via yum or pip; Google it and you can easily install them (I did "pip install ipython" for the GCA server). You also need to install "anaconda" if you don't have it on your computer; for example, I followed a website on anaconda installation on ubuntu. After installation, remember to set ".bashrc" so your terminal sources the anaconda shell script each time you log in. To remove "(base)" from the terminal prompt, run "conda config --set changeps1 False", which makes it disappear permanently. Also, if you want hdf5 in Anaconda3, run "conda install -c anaconda hdf5" after the normal anaconda installation. (NOTE! On my Sony Linux, I don't know why but Python3 disappeared from Jupyter Notebook (only Python2 was available); "jupyter kernelspec list" confirmed this. So I reinstalled the python3 kernel with "python3 -m pip install ipykernel" and then "python3 -m ipykernel install --user". This worked, and now I can use Python3 in Jupyter Notebook (2019/9/21).)

(In run directories such as "RUN_TEMPLATE" and "examples", there is a "Notebook" directory; the ipython scripts are typically located there and can be opened with "jupyter notebook".)


 * 1) command "jupyter notebook"
 * 2) press "New" (in the top right corner) and choose Python3.
 * 3) Here's what I did in the run_test's case.
 * 4) "import ppn as p"
 * 5) a=p.xtime(".")
 * 6) a.cols
 * 7) a.get('N  14') or whatever.
 * 8) then plotting (see how to plot in examples directory's python files)

Input files
Note that PPN is the code, and many astrophysical environments are realized by changing input files. Typical input files are:


 * 1) ppn_frame.input; handles initial condition like time-temperature-rho curves
 * 2) ppn_physics.input; nuclear network, initial abundances
 * 3) ppn_solver.input; solver (computation options)

Note trajectory.input should be prepared from the outputs of MESA calculations. MESA calculations for various conditions, however, have already been made by the NuGrid team and stored on Canfar. Download LOGS/star.log from there and extract a trajectory with "python star_log2ppn_trajectory.py .", which gives trajectory data in PPN input format. If what you need does not exist on Canfar, run MESA yourself and use star_log2ppn_trajectory.py, or the python codes that I wrote, to extract trajectories from history.data (usually in the LOGS directory).
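A rough sketch of what a star_log2ppn_trajectory.py-style extraction does: pick the age, central temperature, and central density columns out of a MESA history.data-style table and write them out as trajectory rows. The MESA column names (star_age, log_center_T, log_center_Rho) and the trajectory header keywords below are assumptions; check the real script and your LOGS directory for the actual names and format.

```python
import io

# Invented history.data-style sample (real MESA files also have header blocks).
history = """\
star_age log_center_T log_center_Rho
1.000e4 7.90 2.00
2.000e4 7.95 2.10
"""

def history_to_trajectory(fh):
    header = fh.readline().split()
    idx = {name: i for i, name in enumerate(header)}
    # Assumed PPN-style unit declarations (see the YRS/T9K note in the text).
    lines = ["AGEUNIT = YRS", "TUNIT = T9K"]
    for line in fh:
        vals = [float(x) for x in line.split()]
        age = vals[idx["star_age"]]                       # years
        t9 = 10.0 ** vals[idx["log_center_T"]] / 1.0e9    # log10(K) -> T9
        rho = 10.0 ** vals[idx["log_center_Rho"]]         # log10(g/cm^3) -> g/cm^3
        lines.append(f"{age:.6e} {t9:.6e} {rho:.6e}")
    return "\n".join(lines)

out = history_to_trajectory(io.StringIO(history))
print(out)
```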

Also, note that in the trajectory file you can set time in different ways, such as YRS (years from the beginning) and DTY (delta year; the interval in years between steps). Also note that sometimes T8K is used instead of T9K for temperature.
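The YRS/DTY and T8K/T9K conventions amount to simple arithmetic; here is a tiny sketch (with invented numbers) of converting a DTY column to cumulative YRS and a T8K column to T9K.

```python
dty = [100.0, 100.0, 50.0]   # DTY: interval in years between steps (invented)
t8k = [2.0, 2.5, 3.0]        # temperature in units of 1e8 K (invented)

# YRS = running sum of the DTY intervals
yrs = []
total = 0.0
for dt in dty:
    total += dt
    yrs.append(total)

# 1e8 K -> 1e9 K is just a factor of 10
t9k = [t / 10.0 for t in t8k]

print(yrs)   # [100.0, 200.0, 250.0]
print(t9k)   # [0.2, 0.25, 0.3]
```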

Trajectory is used only in PPN and not used in MPPNP.

Examples
There is an examples directory containing many PPN and MPPNP examples. Some can be run with "./ppn.exe" after "make", while some need to be run with "./run_me.sh", which contains information about the trajectory file (time:temperature:density).

Tips
- To change the reaction rate of a given target nucleus, add the following in ppn_physics.input:

rate_index(1) = 4270
rate_factor(1) = 100.0

Here rate_index(1) is 4270. Note rate_index is an array of size 10. 4270 means 95Zr(n,g); you can find the number in the networksetup.txt file in the directory where you are running. rate_factor is self-explanatory, just a multiplication factor. The multiplication is done in "evaluate_rate.F90" in the physics/source directory.
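In Python terms, the effect of rate_index/rate_factor (as applied in evaluate_rate.F90) is presumably just a selective multiplication of the rate array; this sketch uses invented rate values to show the idea.

```python
# v(i) as read from networksetup.txt, keyed by network index; values invented.
v = {4270: 1.3e-2, 4271: 5.0e-5}

# rate_index is an array of size 10 in PPN; unused slots are left at 0 here.
rate_index  = [4270, 0, 0]
rate_factor = [100.0, 1.0, 1.0]

for idx, fac in zip(rate_index, rate_factor):
    if idx != 0:
        v[idx] *= fac            # boost the selected rate, e.g. 95Zr(n,g) x100

print(v[4270])   # ~1.3 (the 95Zr(n,g) rate, boosted by 100)
print(v[4271])   # untouched
```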

- Note how to set up isotopedatabase.txt. To run plot_elemental_abund.py (or whatever uses the elemental_abund function), you need the full database set (5307 or something like that). However, when you use the full set, networksetup.txt doesn't recognize the reaction rates of all nuclei, and you only get ~1000 nuclei (mostly up to As). To avoid this, run with the simple database set (~1100) first, and then switch to the full set when you run elemental_abund. You could probably fix this by just changing "NNN = 1107 (or whatever)" in ppn_physics.input, but I haven't succeeded with that yet.

- How to read networksetup.txt: check the "read_networksetup" routine in networksetup.F90 (physics/source):

  do i = 1, nrnc1
     read( nwsetup_fh, fmtreactionread ) nnnr, considerreaction(i), &
          k2(i), spe1, k4(i), spe2, k8(i), spe3, k6(i), spe4, &
          v(i), lab(i), labb(i), ilabb(i), rfac(i), bind_energy_diff(i)

As you can see above, each column corresponds to one of these variables. So the last column is the binding-energy difference (Q-value) in erg (find comments about this in the source code). v(i) is the reaction rate at index i, so v(4270) is the 95Zr(n,g) reaction rate. considerreaction(i) is T/F, and nnnr is (should be) the index number, i.
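For illustration, here is a hypothetical Python parser following the same column order as the Fortran read above. The real file is fixed-format (species names can contain spaces), so this whitespace-split version only works on the invented example line below; it is not a drop-in networksetup.txt reader.

```python
# Column order taken from the Fortran read above.
FIELDS = ["nnnr", "consider", "k2", "spe1", "k4", "spe2",
          "k8", "spe3", "k6", "spe4", "v", "lab", "labb",
          "ilabb", "rfac", "q_erg"]

def parse_reaction(line):
    rec = dict(zip(FIELDS, line.split()))
    rec["nnnr"] = int(rec["nnnr"])
    rec["consider"] = rec["consider"] == "T"   # Fortran logical T/F
    for key in ("v", "rfac", "q_erg"):
        rec[key] = float(rec[key])
    return rec

# Invented example line (values and labels are made up for illustration).
line = "4270 T 1 NEUT 1 ZR95 0 OOOOO 1 ZR96 1.30e-02 (n,g) JINAR 5 1.0 1.23e-05"
rec = parse_reaction(line)
print(rec["nnnr"], rec["consider"], rec["v"])   # 4270 True 0.013
```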

- In general, the codes in the physics/source directory are all useful, so read them when you have questions.