search $SMEI/ucsd/gen/exe/linux/search
 NAME:
	search
 PURPOSE:
	Roundabout way of doing 'grep string files'
 CALLING SEQUENCE:
	search "string" wildcard
 INPUTS:
	string		string to be searched for
	wildcard	wildcard specification for group of files
 MODIFICATION HISTORY:
	Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd $SMEI/ucsd/gen/python/skyd.py
 NAME:
	skyd
 PURPOSE:
	Used to set up the indexing script skyd_wait.py as a
	daemon process.
	skyd.py should not be called directly, but
	should be controlled with the bash script skyd.
 MODIFICATION HISTORY:


skyd_alarm $SMEI/ucsd/gen/python/skyd_wait.py
 NAME:
	skyd_alarm
 PURPOSE:
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_cat $SMEI/ucsd/gen/python/skyd_orbit.py
 NAME:
	skyd_cat
 PURPOSE:
	Manipulates the orbit catalogue
 CALLING SEQUENCE:
	status = skyd_cat(filepath,checkver,prog_version,overwrite,ignore_cat,task,orbnr,min_orbit,max_orbit,status)
 INPUTS:
	filepath	string		file name of user catalogue
	checkver	integer		0: do not check version number
					1: check version number
	prog_version string		smei_skyd version number
	overwrite	integer		0: do not overwrite existing skymap
					1: overwrite existing skymap
	task		string		task to be performed
					one of 'set_busy','set_make','set_done','set_skip'
					'set_busy': select an orbit for indexing
	orb_nr		string		orbit number (as a string!, e.g. '2012')
					used for task 'set_make', 'set_done' and 'set_skip'
	min_orbit	string		lowest orbit number to be processed
	max_orbit	string		highest orbit number to be processed
	status		dictionary	status dictionary
 OUTPUTS:
	status		dictionary	status dictionary
					on success:
					status['number'] and status['message'] are not modified
					(should be 0 and blank string, respectively)
					key 'orbit' contains the full record from the catalogue
					for the relevant orbit.
					on failure:
					status['number' ] is set to 1
					status['message'] contains error message
					Possible reasons:
					- skyd_claim failed to claim the catalogue
					- invalid task
					- no orbit left (task 'set_make')
					- input orbit orb_nr is not marked "busy"
					- skyd_release failed to release the catalogue
 CALLS:
	skyd_claim, skyd_release, skyd_status
 CALLED BY:
	skyd_orbit
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS)
	DEC-2005, Paul Hick (UCSD/CASS)
		Modified to skip empty lines and comments (beginning
		with # character).
	MAY-2007, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		When looking for a "busy" orbit allow also "done" and "make"
		to pass. If this happens the orbit is set to "make" and hence
		will be redone.


skyd_claim $SMEI/ucsd/gen/python/skyd_func.py
 NAME:
	skyd_claim
 PURPOSE:
	Reads the specified file 'filepath' and adds its content to the
	returned status dictionary.
	If claim=True then the file is 'claimed' by switching read protection
	on. This makes the file unavailable until read protection is switched
	off again.
 CALLING SEQUENCE:
	status = skyd_claim(claim,status,filepath)
 INPUTS:
	claim		bool		if True, claim the file after reading (see PROCEDURE)
	status		dictionary	status['number'] should be 0
	filepath	string		name of configuration file or orbit catalogue
 OUTPUTS:
	status		dictionary	on success:
							status['number'] and status['message'] are not modified
								(should be 0 and blank string, respectively)
							status['contents'] is the file contents
								as a string array
							status['chmod'] is the rwx-mode of filepath
								(claim=TRUE only) 
							on failure:
							status['number' ] is set to 1
							status['message'] contains error message
							status['contents'] and status['chmod'] will not be present.
 CALLS:
	skyd_status
 CALLED BY:
	skyd_cat, skyd_read_conf
 SEE ALSO:
	skyd_release
 PROCEDURE:
	Used to control updates to the daemon configuration file and to the
	catalogue of orbits on which the daemon operates.

	claim=TRUE:
		A while loop runs for up to 5 seconds until the file is readable.
		On success the file is read and read protection is switched on. This
		'claims' the conf file until it is 'released' by skyd_release switching
		read protection off again. Failure means either that the file was not
		readable (presumably because a previous call already claimed it, and
		the read protection is still on) or that there was a read error.
	claim=FALSE:
		One attempt is made to read the file.
		Failure means that a read error occurred.
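	The claim/release cycle described above can be sketched in Python
	(a minimal sketch, not the actual SMEI source; the retry interval and
	the error-message text are assumptions):

```python
import os, stat, time

def skyd_claim(claim, status, filepath):
    """Read 'filepath' into status['contents']; if claim is True, also
    save the file's mode in status['chmod'] and switch read protection
    on, 'claiming' the file until skyd_release restores the mode."""
    if status['number'] != 0:
        return status
    deadline = time.time() + 5.0            # retry for at most 5 seconds
    while claim and not os.access(filepath, os.R_OK):
        if time.time() > deadline:
            status['number'] = 1
            status['message'] = 'could not claim ' + filepath
            return status
        time.sleep(0.1)
    try:
        with open(filepath) as f:
            status['contents'] = f.read().splitlines()
    except OSError as why:
        status['number'], status['message'] = 1, str(why)
        return status
    if claim:
        status['chmod'] = stat.S_IMODE(os.stat(filepath).st_mode)
        os.chmod(filepath, 0)               # read protection on: file is claimed
    return status
```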
 MODIFICATION HISTORY:
		DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_count_runs $SMEI/ucsd/gen/python/skyd_func.py
 NAME:
	skyd_count_runs
 PURPOSE:
	Count processes that haven't finished yet
 CALLING SEQUENCE:
	count = skyd_count_runs(lst_runs)
 INPUTS:
	lst_runs	dictionary with info about processes
			running on grunts
 PROCEDURE:
 MODIFICATION HISTORY:
	DEC-2005, Paul  Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_ctrlc $SMEI/ucsd/gen/python/skyd_wait.py
 NAME:
	skyd_ctrlc
 PURPOSE:
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_empty_run $SMEI/ucsd/gen/python/skyd_func.py
 NAME:
	skyd_empty_run
 PURPOSE:
	Set up an empty run with status 'dead'
 CALLING SEQUENCE:
	run = skyd_empty_run()
 OUTPUTS:
	run		dictionary with fields set for 'dead' process
 PROCEDURE:
 MODIFICATION HISTORY:
	DEC-2005, Paul  Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_find $SMEI/ucsd/gen/python/skyd_wait.py
 NAME:
	skyd_find
 PURPOSE:
	Match a file in reportdir against a process in the old_runs
	or new_runs dictionary.
 CALLING SEQUENCE:
	rtn = skyd_find()
 OUTPUTS:
	rtn	dictionary	dictionary returned from skyd_find_run
				with one extra item if a matching report
				was found:
				'list' is set to 'old_runs' or 'new_runs'
				reflecting the process list
 CALLS:
	skyd_find_run
 PROCEDURE:
	The content of reportdir is picked up. This will include all
	report files, and possibly some other files too.
	First the old_runs list is checked for a matching report file.
	If unsuccessful, the new_runs list is tried.
	It is essential that the old_runs list is tried first.
	Ideally this list is empty already. If not, we want to clean
	it out before processing new_runs.
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_find_run $SMEI/ucsd/gen/python/skyd_wait.py
 NAME:
	skyd_find_run
 PURPOSE:
	Match a file on a list of files to a process on a specified list
 CALLING SEQUENCE:
	rtn = skyd_find_run( reports, lst_runs )
 INPUTS:
	reports		string array
				list of files in reportdir
				skyd_orbit runs send SIGUSR1 signals back
				to skyd_wait, signalling that a 'report' file
				describing their status was put in reportdir.
				These report files should be in the 'reports' array.
	lst_runs	dictionary
				old_runs or new_runs list of skyd_orbit processes
 OUTPUTS:
	rtn		dictionary
				if a matching report is found
				then the entries are
				'report'  	name of report from 'reports'
				'grunt' 	name of grunt running process
				'run'		number of run on grunt
				'status'	either 'start' or 'runs'
				'result'	'runs' if 'status'='start'
						either 'done' or 'kill' if
						'status'='runs'
				if no matching report is found
				then only one entry is present:
				'report'	set to null string
 CALLED BY:
	skyd_find
 PROCEDURE:
	An attempt is made to match one of the processes in
	lst_runs to one of the reports.
	If a process is marked 'start' then the matching
	report file has name <report>_runs or <report>_kill.
	If a process is marked 'runs' then the matching
	report file has name <report>_done or <report>_kill.
	If a process is marked 'dead' then no report
	is expected for that process.
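	The matching rule above can be sketched as follows (a sketch with a
	hypothetical dictionary layout based on the entries listed under
	skyd_load; 'wmark' holds the watermark):

```python
def skyd_find_run(reports, lst_runs):
    # Each non-'dead' process carries a watermark; the matching report
    # file is <watermark>_runs/_done/_kill depending on the state.
    expected = {'start': ('runs', 'kill'), 'runs': ('done', 'kill')}
    for grunt, runs in lst_runs.items():
        for nr, run in enumerate(runs):
            if run['status'] == 'dead':
                continue                    # no report expected
            for suffix in expected[run['status']]:
                report = run['wmark'] + '_' + suffix
                if report in reports:
                    return {'report': report, 'grunt': grunt,
                            'run': nr, 'status': run['status'],
                            'result': suffix}
    return {'report': ''}                   # no matching report found
```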
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_go $SMEI/ucsd/gen/python/skyd_wait.py
 NAME:
	skyd_go
 PURPOSE:
 PROCEDURE:
	Called when a SIGUSR1 signal is received from a skyd_orbit run.
	These are sent after a 'report' file has been put in reportdir.
	Try to match one of the files in reportdir to one of the processes
	in the old_runs or new_runs dictionary using the watermark.
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_kill_runs $SMEI/ucsd/gen/python/skyd_func.py
 NAME:
	skyd_kill_runs
 PURPOSE:
	Kill processes running on grunts by sending them
	a SIGKILL signal
 CALLING SEQUENCE:
	skyd_kill_runs(lst_runs,boss)
 INPUTS:
	lst_runs	dictionary with info about processes
			running on grunts
 PROCEDURE:
	Sending a keyboard interrupt will cause an exception to occur
	in skyd_orbit allowing skyd_orbit to update the user catalogue.
	However this will not kill the smeidb_skyd program.
	A SIGKILL signal will kill the program but without a catalogue
	update (leaving orbits with status 'busy').
	For now we stick with SIGKILL.
 MODIFICATION HISTORY:
	DEC-2005, Paul  Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_load $SMEI/ucsd/gen/python/skyd_wait.py
 NAME:
	skyd_load
 PURPOSE:
	Read conf file and set up the dictionary needed for
	tracking the skyd_orbit processes launched on all grunts.
 SIDE EFFECTS:
	Several entries in the conf file are updated/added.
	'pid' is set with pid of skyd_wait
	'boss' is set to hostname of machine that controls skyd_wait
	'time' is set to time at which skyd_wait was started

	If 'max_load' (# processes per grunt) is not set then
	'max_load' is set to 2.
	If 'grunts' (list of machines that run skyd_orbit) is not
	set then 'grunts' is set to 'boss' (i.e. the daemon will
	run skyd_orbit locally only).
	If 'reportdir' (dir for report files) is not set then it
	is set to $TUB.
 PROCEDURE:
	The global variable new_runs is set up as a dictionary
	with one entry for each grunt listed in the conf file.
	The entry for each grunt is a list of max_load elements.
	Each element is a dictionary with entries describing
	the process. Each process is characterized by:
		'status': 'dead','start' or 'runs'
		'wmark'	: '' or 'watermark'
		'result': '', 'runs','done','kill'
	Processes are initialized here as 'dead' with a
	blank watermark. The watermark is a filename of type
	<reportdir>/skyd_<random> with <random> a unique set
	of characters (created by tempfile.mkstemp).
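	The initialization described above might look like this (a sketch;
	the explicit grunts/max_load/reportdir arguments are assumptions,
	since the real routine reads them from the conf file):

```python
import os, tempfile

def skyd_load(grunts, max_load, reportdir):
    # One list of max_load process slots per grunt, all initialized
    # 'dead' with a unique watermark from tempfile.mkstemp.
    new_runs = {}
    for grunt in grunts:
        slots = []
        for _ in range(max_load):
            fd, wmark = tempfile.mkstemp(prefix='skyd_', dir=reportdir)
            os.close(fd)
            os.remove(wmark)               # only the unique name is kept
            slots.append({'status': 'dead', 'wmark': wmark, 'result': ''})
        new_runs[grunt] = slots
    return new_runs
```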
 MODIFICATION HISTORY:
	DEC-2005, Paul  Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_not_running $SMEI/ucsd/gen/python/skyd_func.py
 NAME:
	skyd_not_running
 PURPOSE:
	Find run marked 'dead'
 CALLING SEQUENCE:
	run = skyd_not_running(lst_runs)
 INPUTS:
	lst_runs	dictionary with info about processes
			running on grunts
 PROCEDURE:
	Return run number for first 'dead' process found.
	If no 'dead' processes are left then return -1.
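	A minimal sketch of this search, assuming lst_runs here is one
	grunt's list of process dictionaries:

```python
def skyd_not_running(runs):
    # Return the index of the first 'dead' slot, or -1 if none left.
    for nr, run in enumerate(runs):
        if run['status'] == 'dead':
            return nr
    return -1
```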
 MODIFICATION HISTORY:
	DEC-2005, Paul  Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_orbit $SMEI/ucsd/gen/python/skyd_orbit.py
 NAME:
	skyd_orbit
 PURPOSE:
	Set up call to the indexing program
 CALLING SEQUENCE:
	status = skyd_orbit(status)
 CALLS:
	Say(1), dict_entry, say(2), skyd_cat, skyd_status
 EXAMPLE:
	To call skyd_orbit.py directly from command line use a call like this:
		skyd_orbit.py -orbit=25182 -camera=1 -mode=2 -source=SMEIDB?
			-avoidsun -avoidmoon
			-destination=$SMEISKY0/sky/c1 -overwrite -alltheway
			-catalogue=$SKYD/list/skyd_c1m2.txt -level=3
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS)
	DEC-2007, Paul Hick (UCSD/CASS)
		Fixed bug in main section: added check for label
		cur_label after skyd_orbit finishes to make sure
		it still is present in the conf file.
	SEP-2008, Paul Hick (UCSD/CASS)
		Added argument sdark=<-1,3,10>
	JAN-2013, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Added /fix_centroid to smei_star_remove call


skyd_read_conf $SMEI/ucsd/gen/python/skyd_func.py
 NAME:
	skyd_read_conf
 PURPOSE:
	Read configuration file for SMEI indexing daemon
 CALLING SEQUENCE:
	status = skyd_read_conf(cffile,claim,status)
 INPUTS:
	cffile		string		name of configuration file
	claim		bool		passed to skyd_claim
	status		dictionary	status dictionary
 OUTPUTS:
	status		dictionary	on success:
							status['number'] and status['message'] are not modified
								(should be 0 and blank string, respectively)
							status['conf'] is a dictionary with all the conf entries.
							status['chmod'] is the rwx-mode of the original conf file
								(claim=TRUE only; added by skyd_claim)
							on failure:
							status['number' ] is set to 1
							status['message'] contains error message
 CALLS:
	skyd_claim
 SEE ALSO:
	skyd_write_conf
 PROCEDURE:
	Only called if skyd_orbit is controlled by indexing daemon skyd_wait.
	Not called if skyd_orbit is called directly.
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS)
	FEB-2006, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Modified to skip empty lines and comments (lines
		with # character at the beginning).


skyd_release $SMEI/ucsd/gen/python/skyd_func.py
 NAME:
	skyd_release
 PURPOSE:
	Update and release the configuration file of the SMEI indexing daemon.
	Only used when daemon is run on multiple machines on the SMEI subnet,
	i.e. if the indexing is done on machines other than 'boss'.
 CALLING SEQUENCE:
	status = skyd_release(status,filepath)
 INPUTS:
	status		dictionary	status['chmod'] is rwx-mode of the original conf file
							status['contents'] is content of updated conf file
							to be written into 'filepath'
	filepath	string		name of configuration file
 OUTPUTS:
	status		dictionary	on success:
							status['number'] and status['message'] are not modified
								(should be 0 and blank string, respectively)
							on failure:
							status['number' ] is set to 1
							status['message'] contains error message
							status['chmod'] and status['contents'] are
							always removed from the dictionary
 CALLS:
	skyd_status
 CALLED BY:
	skyd_cat, skyd_write_conf
 SEE ALSO:
	skyd_claim
 SIDE EFFECTS:
	Conf file 'filepath' is put back where it belongs (is 'released').
	Temporary files are cleaned up.
 PROCEDURE:
	The original mode of conf file status['chmod'] and the updated configuration
	status['contents'] are popped from the status dictionary.
	Then the updated content is written back to the conf file, and
	the original rwx-mode is set (which 'releases' the file by making it
	readable again).
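	The write-back can be sketched like this (an assumption-laden sketch:
	the updated contents are written via a temporary file in the same
	directory, matching the 'temporary files are cleaned up' note above):

```python
import os, tempfile

def skyd_release(status, filepath):
    # Pop the saved mode and updated contents, write the new contents
    # to a temporary file, move it into place, and restore the original
    # rwx-mode (which 'releases' the file by making it readable again).
    chmod = status.pop('chmod', None)
    contents = status.pop('contents', None)
    if status['number'] != 0:
        return status
    try:
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(filepath) or '.')
        with os.fdopen(fd, 'w') as f:
            f.write('\n'.join(contents) + '\n')
        os.replace(tmp, filepath)          # put the conf file back where it belongs
        if chmod is not None:
            os.chmod(filepath, chmod)
    except OSError as why:
        status['number'], status['message'] = 1, str(why)
    return status
```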
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_reload $SMEI/ucsd/gen/python/skyd_wait.py
 NAME:
	skyd_reload
 PURPOSE:
	Rereads the conf file and resets the dictionary needed for
	tracking the skyd_orbit processes.
 CALLING SEQUENCE:
	skyd_reload
 PROCEDURE:
	skyd_reload is called when a SIGHUP signal is received.

	skyd_reload merges the current list of processes in new_runs
	with the list of unfinished processes in old_runs.
	Then skyd_load is called to set up a fresh new_runs list.

	The old_runs array will only contain processes that are
	marked as 'start' or 'runs'.
 MODIFICATION HISTORY:
	DEC-2005, Paul  Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_show_runs $SMEI/ucsd/gen/python/skyd_func.py
 NAME:
	skyd_show_runs
 PURPOSE:
	Show summary of processes running on grunts
 CALLING SEQUENCE:
	skyd_show_runs(lst_runs,grunt_,boss)
 INPUTS:
	lst_runs	dictionary with info about processes
			running on grunts
 PROCEDURE:
	Prints dictionary content
 MODIFICATION HISTORY:
	DEC-2005, Paul  Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_sighup $SMEI/ucsd/gen/python/skyd_wait.py
 NAME:
	skyd_sighup
 PURPOSE:
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_start $SMEI/ucsd/gen/python/skyd_wait.py
 NAME:
	skyd_start
 PURPOSE:
	Start indexing runs for specified grunts until the
	maximum number of processes are running.
 CALLING SEQUENCE:
	new_count = skyd_start(grunt,count)
 INPUTS:
	grunt		string		name of computer on SMEI subnet
	count		integer		number of minutes to delay start
					of next skyd_orbit run
 OUTPUTS:
	new_count	integer		input count, plus number of
					new processes started here
 PROCEDURE:
	All processes marked as 'dead' in new_runs[grunt]
	are selected to be launched again.
	skyd_orbit is set up to be submitted to the at batch
	queue on grunt with a delay of count minutes.
	For each process launched count is incremented by one.
	The final count value is returned.

	skyd_start is typically run in a loop like this:

	count = 0
	for grunt in grunts:
		count = skyd_start(grunt, count)

	As a result, skyd_orbit runs are launched across all grunts
	at intervals of roughly 1 minute. This should reduce the
	risk of multiple skyd_orbit runs accessing the conf file or
	the user catalogue at the same time (skyd_orbit actually
	provides some defense against this, but better safe than sorry).
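	A sketch of the loop body (the actual 'at' submission over ssh is
	reduced to a comment; passing new_runs explicitly instead of as a
	global is an assumption of this sketch):

```python
def skyd_start(new_runs, grunt, count):
    # Relaunch every slot marked 'dead' on this grunt. Each relaunched
    # slot would be submitted with a delay of 'count' minutes; count is
    # incremented once per launch and the final value returned.
    for run in new_runs[grunt]:
        if run['status'] == 'dead':
            # e.g.: ssh <grunt> "echo skyd_orbit.py ... | at now + <count> minutes"
            run['status'] = 'start'
            count += 1
    return count
```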
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_state $SMEI/ucsd/gen/python/skyd_state.py
 NAME:
	skyd_state
 PURPOSE:
	Displays information about skyd_wait.py state
 CALLING SEQUENCE:
	skyd_state.py [-cffile=<cffile> -state=<state>]
	skyd_state(conf_file,state)
 INPUTS:
	conf_file	string		name of configuration file
					default: $SKYD/cf/skyd_<host>.cf
	state		string		any one of the possible states of an orbit
					('make','busy','skip','pass','done')
					default: 'busy'
 PROCEDURE:
	Accesses the configuration file and all orbit catalogues
	specified in the configuration file.
 MODIFICATION HISTORY:
	JAN-2006, Paul Hick (UCSD/CASS)
	NOV-2006, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Added counts for each version number for "done" orbits


skyd_status $SMEI/ucsd/gen/python/skyd_func.py
 NAME:
	skyd_status
 PURPOSE:
	Updates status of SMEI indexing daemon
 CALLING SEQUENCE:
	status = skyd_status(status, istat, message)
 INPUTS:
	status		status dictionary
	istat		integer		status number
	message		string		message string
 OUTPUTS:
	status		updated status dictionary
 CALLS:
	tiny.say
 CALLED BY:
	skyd_cat, skyd_claim, skyd_orbit, skyd_release
 PROCEDURE:
	Two entries in status are updated:
	status['number' ] = istat
	status['message'] = message
	The message is printed to standard output if the length
	is non-zero.
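	As a sketch (using print in place of tiny.say):

```python
def skyd_status(status, istat, message):
    # Update the two bookkeeping entries; echo a non-empty message.
    status['number'] = istat
    status['message'] = message
    if len(message) != 0:
        print(message)
    return status
```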
 MODIFICATION HISTORY:
		DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


skyd_wait $SMEI/ucsd/gen/python/skyd_wait.py
 NAME:
	skyd_wait
 PURPOSE:
	Main routine for SMEI indexing daemon. This is the program
	set up by skyd.py as a daemon
 CALLING SEQUENCE:
	skyd_wait(conf_file)
 INPUTS:
	conf_file	string		name of configuration file
 RESTRICTIONS:
	Outstanding issues:

	Sometimes the daemon crashes when trying to remove a report file,
	because the report file doesn't exist anymore. Presumably the same
	report file is picked up before the filesystem is updated after
	a remove during processing of the previous signal.
	It should be possible to make sure that the same report file is
	processed only once.

	I'm not sure what happens if the two report files (_runs and
	_done (or _terminate)) are present at the same time. This could
	happen when processing many signals in a short time.
 PROCEDURE:
	There are a number of requirements for this whole scheme to work.

	0. all software must be accessible from all machines.
		Currently everything is located in the SMEI software tree on the SMEI
		server (which is NFS exported to all machines on the subnet)
	1. sshd must be running on all machines (boss launches remote skyd_orbits
		by issuing a command over ssh; grunts send back signals to boss by
		launching the kill command over ssh).
	2. all grunts must have an account with the same username as the account
		on boss (running skyd_wait). Moreover, the accounts must allow access
		in both directions using SSH keys (i.e. no passwords needed).
		(see ssh_to.py to set this up)
		Currently an account with user name skyd exists on all machines on
		the subnet.
	3. the batch utility 'at' must be installed on all grunts
		(skyd_orbit is launched as a batch job using 'at')
	4. the configuration file and user catalogue must be located
		where all machines can access it (e.g. in a directory on a shared
		NFS volume), and all machines should have read & write permission,
		and should be able to switch read-protection on/off.
	5. reportdir, the directory where skyd_orbit puts the report files, also must
		be accessible by all machines.
	6. when 'at' finishes a job it sends an email to the local account (currently
		the account skyd). Email forwarding is set up on all machines, sending
		all these emails to the machine running the daemon. The easiest way to
		do this is to create a soft link in the home directory of the skyd
		account pointing to a .forward file located somewhere where all
		machines have read access.

	Most configuration file entries are optional with reasonable defaults
	if absent. The configuration file has the structure

	<name_1>=<value_1>
	<name_2>=<value_2>
	..
	<name_n>=<value_n>
	group_0:
	<_name_1>=<value0_1>
	..
	<_name_m>=<value0_m>
	group_1:
	<_name_1>=<value0_1>
	..
	<_name_m>=<value0_m>
	..
	..
	group_k:
	<_name_1>=<value0_1>
	..
	<_name_2>=<value0_2>

	The first block contains the 'global entries'. The following entries are
	separate groups with instructions to run a specific group of orbits.
	Successive skyd_orbit calls pick an orbit from group one, then group two,
	and so on, cycling through all groups.

	Global entries:

	reportdir=reportdir	directory where report files from skyd_orbit
				runs are put. Default: $TUB
	max_load=max_load	max number of concurrent processes run on
				each grunt. Default: 2
	grunts=grunt1,grunt2	comma separated list of machines on which
				to run skyd_orbit. Default: boss

	The following main entries are set when skyd_wait is started.

	boss=boss		machine controlling skyd_wait
	pid=pid			process id of skyd_wait
	time=2005_354_162151	time at which skyd_wait started
	cur_group=0		first group for which orbit is selected
				(this field incremented in skyd_orbit)

	Group entries:

	Many of these correspond to keywords needed for the indexing program
	smeidb_skyd

	_camera=camera		camera id (1,2,3); default: 1
	_mode=mode		mode id (0,1,2); default: -1 (this effectively
				selects the main science mode for each camera:
				mode 2 for cam 1 and 2; mode 1 for cam 3)
	_min_orbit=min_orbit	minimum orbit to run; zero means no restriction
				on minimum orbit; default: 0
	_max_orbit=max_orbit	maximum orbit to run; zero means no restriction
				on maximum orbit; default: 1
	_source=source		source for SMEI frames; default: SMEIDC?
	_destination=destination
				destination directory for sky maps
	_level=level		indexing level
	_keepglare=0/1		0: subtract glare; 1: keep glare; default: 0
	_catalogue=catalogue	user catalogue
	_checkversion=0/1	0: don't check version number; 1: update only
				if smeidb_skyd version is higher than version
				in existing skymap
	_overwrite=0/1		0: don't overwrite existing skymaps
				1: overwrite existing skymaps

	The user catalogue is an ascii file with one line for each orbit in the
	following format:

   orbnr  orbnr+1 YYYY_DOY_hhmmss status

	orbnr			orbit number to be processed
	orbnr+1			orbit number, plus one
	YYYY_DOY_hhmmss	start time of orbit orbnr
	status			status of orbit; can be 'make','skip','busy','done'

	The catalogue can be constructed from a list of available skymaps
	using the IDL procedure skyd_cat.
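	The conf-file layout and catalogue format above could be parsed with a
	sketch like this (hypothetical helper names; the daemon's actual
	parsing may differ):

```python
def parse_conf(lines):
    # Global name=value entries first; each 'group_N:' header starts a
    # new block of _name=value entries. Blanks and '#' comments skipped.
    conf = {'groups': []}
    section = conf                      # global entries go here first
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        if line.endswith(':'):          # 'group_N:' starts a new group
            section = {}
            conf['groups'].append(section)
        else:
            name, _, value = line.partition('=')
            section[name] = value
    return conf

def parse_catalogue_line(line):
    # 'orbnr orbnr+1 YYYY_DOY_hhmmss status' -> dictionary
    orbnr, orbnr1, start, status = line.split()
    return {'orbit': int(orbnr), 'next': int(orbnr1),
            'start': start, 'status': status}
```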
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS)
	NOV-2006, Paul Hick (UCSD/CASS)
		The script sometimes crashed while executing the os.remove(report)
		command, claiming the file didn't exist even though the file
		was just read successfully, and the remove actually worked correctly.
		Bracketed os.remove(report) command with try block to avoid crash.
		Also added the reports_claimed dictionary to store the names of
		report files that have been identified by skyd_find_run. The name
		is deleted from the dictionary only if the os.remove() succeeds.
	JAN-2007, Paul Hick (UCSD/CASS)
		Moved initialization of global reports_claimed in front of the loop
		starting the initial indexing runs.
	MAR-2008, Paul Hick (UCSD/CASS)
		Fixed bug in processing of -overwrite keyword.
		Now if -overwrite is set for one or more groups then the
		orbit catalogue for the range of orbits specified is updated
		by changing the status for all "done" orbits to "make".
		This way, skyd_orbit only needs to look at status "make"
		if -overwrite is set. Without this change skyd_orbit will
		keep processing the same "done" orbit over and over again.
	NOV-2008, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		The call to skyd_orbit is now prefixed with ". $HOME/.bashrc".
		This did not use to be necessary. It is needed now for FC9;
		not sure why.


skyd_write_conf $SMEI/ucsd/gen/python/skyd_func.py
 NAME:
	skyd_write_conf
 PURPOSE:
	Write configuration file for SMEI indexing daemon
 CALLING SEQUENCE:
	status = skyd_write_conf(cffile,status)
 INPUTS:
	cffile		string		name of configuration file
	status		dictionary	status dictionary
							Should have keys 'conf' and 'chmod'
 OUTPUTS:
	status		dictionary	the return status of skyd_release
 CALLS:
	skyd_release
 SEE ALSO:
	skyd_read_conf
 PROCEDURE:
	Only called if skyd_orbit is controlled by indexing daemon skyd_wait.
	Not called if skyd_orbit is called directly.
 MODIFICATION HISTORY:
	DEC-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


smei_pipeline.sh $SMEI/ucsd/gen/exe/linux/smei_pipeline.sh
 NAME:
	smei_pipeline.sh
 PURPOSE:
	Runs all steps in SMEI data pipeline
	Tries to avoid running multiple copies of sync_l1a.exp
 CATEGORY:
	gen/exe/linux
 CALLING SEQUENCE:
	smei_pipeline.sh
 OUTPUTS:
	$?	exit code is number of L1A files retrieved
		If expect is already running then exit code = 0
 CALLS:
	sync_l1a.exp
 RESTRICTIONS:
	The assumption is that sync_l1a.exp is the only expect script in use.
 PROCEDURE:
	The grep command looks for lines in the output of "ps -ef" that contain
	"expect", but not "grep" (to avoid picking up the grep command itself).
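	The same check can be sketched in Python rather than with grep
	(filtering in Python avoids the "grep -v grep" step; expect_is_running
	is a hypothetical helper name, not part of the pipeline script):

```python
import subprocess

def expect_is_running():
    # Scan the 'ps -ef' output for a line mentioning 'expect'; the grep
    # command itself never shows up here because the filter runs in
    # Python instead of as a second process.
    ps = subprocess.run(['ps', '-ef'], capture_output=True, text=True)
    return any('expect' in line for line in ps.stdout.splitlines())
```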

	sync_l1a.exp needs user and password information. This is pulled out of
	$HOME/.netrc

	The current sequence of operations:

	1. Rsync from celeste
	2. Move new L1A files to $L1A_GET for unpacking
	3. Unpack new L1A files

	If a new day has been cleared for the pipeline:

	4. Update TLEs and orbit start times
	5. Make "closed shutter" calibration patterns
	6. Calculate pedestal/dark currents
	7. (Cam 3 only) Make "on-the-fly" orbital patterns
	8. Make skymaps (sky, equ, ecl, and pnt files)

 MODIFICATION HISTORY:
	JUL-2009, Paul Hick (UCSD/CASS)
	DEC-2009, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Expanded to include all steps in the SMEI data pipeline.


smei_star_split $SMEI/ucsd/gen/python/smei_star_split.py
 NAME:
	smei_star_split
 PURPOSE:
	Extract all data from a collection of 'pnt' files in subdirectories
	c1, c2, c3, c1m0, c2m0, c3m0 of SOURCE where SOURCE is specified
	through option --source
 CALLING SEQUENCE:
	smei_star_split [--source=SOURCE ]
 OPTIONAL INPUTS:
	--source=SOURCE		source directory for 'pnt' files;
						default: $SMEISKY0/pnt
						all 'pnt' files in subdirectories
						c1,c2,c3,c1m0,c2m0,c3m0 are processed
 OUTPUTS:
	The resulting files for individual stars are written to the
	current working directory.
 RESTRICTIONS:
	The script opens as many files as there are stars present in
	the 'pnt' files (plus planets and asteroids). The current total
	is just under 6000 objects. The script should raise a flag if
	opening this many files at the same time is not allowed.
	To be able to open this many files it is probably necessary to
	add a line to /etc/security/limits.conf, e.g. if running on the
	'soft' account add:
		soft        soft    nofile          8192
		soft        hard    nofile          8192
 PROCEDURE:
	'pnt' files are produced as byproduct of the star fitting and
	subtraction process by the IDL procedure smei_star_remove.pro.
	By default, these files are collected in subdirectories
	c1, c2, c3, c1m0, c2m0, c3m0 of directory $SMEISKY0/pnt.
	These files contain all fit information for stars in a single
	skymap for camera 1, 2 or 3.
	This script reorders the content of these files by creating
	a single file for each individual star containing all fits
	for that star across all 'pnt' files.

	These files will initially contain six consecutive groups of
	records: one for each of the six subdirectories, with fits for
	each camera ordered chronologically.

	After the 'pnt'-rewriting is finished the resulting files
	for each star can be sorted into strict chronological order
	(independent of originating subdirectory) by running all
	files through the IDL procedure smei_star_split.pro.
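	The per-star regrouping can be sketched as follows (hypothetical
	record layout: (star_id, fit_record) pairs standing in for the
	binary 'pnt' records):

```python
from collections import defaultdict

def split_by_star(pnt_records):
    # Collect every fit record under its star identifier, so each star
    # ends up with all its fits across all 'pnt' files, in input order.
    per_star = defaultdict(list)
    for star, fit in pnt_records:
        per_star[star].append(fit)
    return per_star
```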
 MODIFICATION HISTORY:
	JUL-2012, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Spruced up John's original version.


smeicvs $SMEI/ucsd/gen/python/smeicvs.py
 NAME:
	smeicvs
 PURPOSE:
	Updates working copies in the $SMEI tree from the
	SMEI CVS repository.
 CATEGORY:
	gen/python
 CALLING SEQUENCE:
	smeicvs.py -import <directory>
	smeicvs.py -checkout <directory>
	smeicvs.py <directory>
 INPUTS:
	<directory>	fully-qualified directory in the SMEI tree
 OPTIONAL INPUT PARAMETERS:
	-import		import into repository
				Sets up all code in the tree attached to <directory>
				in the SMEI CVS repository.
	-checkout	check out of repository into $SMEI tree
 OUTPUTS:
 CALLS:
	tiny.args, tiny.is_there
 SEE ALSO:
 EXAMPLE:
	smeicvs.py -import $SMEI/ucsd/camera/for/pattern
		was used to import the pattern software into the
		repository
	smeicvs.py -checkout $SMEI/ucsd/camera/for/pattern
		was used to extract the pattern software from the
		repository and put it in the SMEI tree (after removing
		the original directory used in the import).
	smeicvs.py $SMEI/ucsd/camera/for/pattern
		After the initial import or export, this command
		uses the repository to update the SMEI tree
 PROCEDURE:
	The -import and -checkout options should be needed only once
	when setting up a subdirectory in $SMEI.
	After that a call 'smeicvs.py <directory>' will update everything.

	The repository is $SMEI/.cvscode, i.e. the repository itself
	is part of the SMEI tree.

	A subdirectory in $SMEI is imported using the fully-qualified
	directory name with slashes replaced by underscores,
	e.g. $SMEI/ucsd/gen/for will become project ucsd_gen_for.

	For the initial import and for subsequent updates the subdirectory
	is renamed from e.g. $SMEI/ucsd/gen/for to $SMEI/ucsd/gen/ucsd_gen_for.
	Then cvs is called to do the import or update. Finally the
	directory is renamed to its original name.
 MODIFICATION HISTORY:
	DEC-2004, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


smeidb_newcal $SMEI/ucsd/gen/exe/linux/smeidb_newcal
 NAME:
	smeidb_newcal
 PURPOSE:
	Maintains SMEI data base of calibration patterns
 CATEGORY:
	/gen/exe/linux
 CALLING SEQUENCE:
	smeidb_newcal
 CALLS:
	smeidb_cal
 PROCEDURE:
	The ascii input file for a new weekly calibration should
	be put in $SMEIDB/cal/new. Then this script is run.
	All ascii files in $SMEIDB/cal/new are fed to smeidb_cal
	to produce a new calibration pattern in $SMEIDB/cal.
	If the exit code of smeidb_cal is 3 (indicating success)
	the corresponding ascii file is moved to $SMEIDB/cal/txt
 MODIFICATION HISTORY:
	SEP-2007, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


sophos_init $SMEI/ucsd/gen/exe/linux/sophos_init
 NAME:
	sophos_init
 PURPOSE:
	Install Sophos
 CALLING SEQUENCE:
	sophos_init
 RESTRICTIONS:
	The cronjob set up to run sophos_sweep assumes that
	cass247.ucsd.edu is the Sophos server, and that
	.netrc contains the username and password needed to
	mount the SMB mount.
 PROCEDURE:
	Needs Sophos tarball in working directory.
 MODIFICATION HISTORY:
	NOV-2005, Paul Hick (UCSD/CASS)


sophos_sweep $SMEI/ucsd/gen/exe/linux/sophos_sweep
 NAME:
	sophos_sweep
 PURPOSE:
	Script to update Sophos engine and IDE files and run sweep.
 CALLING SEQUENCE:
	sophos_sweep <email_recipients> <server> <user> <password>
 INPUTS:
	email_recipients	list of email addresses to which
				the result of the virus sweep is emailed
	server			name of Sophos server
	user			user name on Sophos server
	password		password on Sophos server
				These last three are passed to sophos_update
				If no server is specified then the update is
				skipped. Username and password can also be
				specified through $HOME/.netrc
 CALLS:
	sophos_update
 RESTRICTIONS:
 >	Can only be run on the root account
 >	/etc/sav.conf MUST exist
 >	The SAV virus data directory specified in sav.conf MUST exist.
 PROCEDURE:
	Configuration file /etc/sav.conf contains:
		SAV virus data directory = /usr/local/sav
	This is stored in local symbol SAV_DIR.
	Note that this directory should match the cache directory
	in eminstall.conf (see sophos_update)
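	Extracting SAV_DIR from that configuration line is a one-line parse;
	a minimal Python sketch (the function name is hypothetical; the bash
	script itself does this with grep and gawk, as shown in the 'old
	method' listed under sophos_update):

```python
def sav_dir(conf_text):
    """Return the 'SAV virus data directory' setting from sav.conf text,
    or None if the line is absent."""
    for line in conf_text.splitlines():
        if line.startswith('SAV virus data directory'):
            return line.split('=', 1)[1].strip()
    return None
```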
 MODIFICATION HISTORY:
	Original written by Paul Yeatman, Oct. 26, 2004
	NOV-2005, Paul Hick (UCSD/CASS)
	    Modified for use on SMEI subnet.


sophos_update $SMEI/ucsd/gen/exe/linux/sophos_update
[Previous] [Next]
 NAME:
	sophos_update
 PURPOSE:
	Update Sophos engine and virus database
 CALLING SEQUENCE:
	sophos_update <server> [<user> <password>]
 INPUTS:
	server		Sophos server
			If not specified then the Sophos server should already
			be mounted, or the procedure aborts.
	user		user name on Sophos server
	password	password on Sophos server
			If username and password are not specified then
			an attempt is made to get them from $HOME/.netrc
			If not specified and not found in .netrc the procedure
			aborts.
 CALLED BY:
	sophos_sweep
 RESTRICTIONS:
 >	Can only be run on the root account
 PROCEDURE:
	Configuration files.

	/etc/sav.conf contains: (currently NOT USED)
		SAV virus data directory = /usr/local/sav

	/etc/eminstall.conf
		EM install CID = /media/sophos_server/unixinst/linux/intel_libc6_glib2_2
		EM cache dir = /usr/local/sav
		protocol = smbfs

	The "EM cache dir" is stored in local symbol SOP_SAVE_DIR.
	(The same directory is defined in /etc/sav.conf under "SAV virus data directory")

	The part of "EM install CID" preceding '/unixinst' is stored in local symbol
	SOP_SERVER_MNTPNT (the mount point for the Samba mount to the SOP server).
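	Deriving the two local symbols from eminstall.conf can be sketched as
	follows (illustrative Python; the function name is invented, and the
	real script does this in bash):

```python
def parse_eminstall(conf_text):
    """Derive SOP_SAVE_DIR ('EM cache dir') and SOP_SERVER_MNTPNT (the
    part of 'EM install CID' preceding '/unixinst') from eminstall.conf."""
    conf = {}
    for line in conf_text.splitlines():
        if '=' in line:
            key, value = line.split('=', 1)
            conf[key.strip()] = value.strip()
    cache_dir = conf.get('EM cache dir')                        # SOP_SAVE_DIR
    mount_point = conf.get('EM install CID', '').split('/unixinst')[0]
    return cache_dir, mount_point                               # SOP_SERVER_MNTPNT
```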


	===============
	The old method:

	SAV=$(grep "SAV virus data directory" /etc/sav.conf | gawk '{print $6}')
	GET=$SAV/get
	LOG=/tmp/update_sophos.log.$$
	wget -a$LOG -P$GET http://www.sophos.com/downloads/ide/ides.zip
	unzip -o $GET/ides.zip -d $SAV/
	chmod 644 $SAV/*.ide
	rm -f $GET/ides.zip > /dev/null

 MODIFICATION HISTORY:
	Original version by Paul Yeatman
	NOV-2005, Paul Hick (UCSD/CASS)
	    Updated for SMEI cluster.


splat $SMEI/ucsd/gen/python/splat.py
[Previous] [Next]
 NAME:
	splat
 PURPOSE:
	Converts ascii files between Unix and DOS format.
 CALLING SEQUENCE:
	splat.py file_name
 INPUTS:
	file_name	name of ascii file or directory name
			If a directory is specified all files in
			the directory are converted.
 PROCEDURE:
	The script will overwrite the input file
 MODIFICATION HISTORY:
	MAY-2002, Paul Hick (UCSD/CASS)
	APR-2005, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Removed dependence on 'string' module.
		Files are now only overwritten if the contents change.


split_regex $SMEI/ucsd/gen/python/publications.py
[Previous] [Next]
 NAME:
	split_regex
 PURPOSE:
	Interpret regular expression
 INPUTS:
	regex_string	string
		comma separated list of key=value pairs.
		If the key is omitted (i.e. only the value is specified) then
		the key name is assumed to be 'key'.
		The value is optionally bracketed by single or double quotes

		key names 'key' and 'cat' are used to filter the topkeys list
 OUTPUTS:
	result			dictionary of key-value pairs
 EXAMPLE:
	input:			output:
	value1			{'key': 'value1'}
	key=value1		{'key': 'value1'}
	k=value1,value2	{'k': 'value1', 'key': 'value2'}

	'cat','key','attr' have special meaning.
	All others refer to entries associated
	with publications: author, title, etc.
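	The parsing rule and the examples above can be sketched as follows
	(illustrative only; the real function also filters the topkeys list
	on 'key' and 'cat', which is not shown, and quoted values containing
	commas are not handled in this sketch):

```python
def split_regex(regex_string):
    """Parse a comma-separated list of key=value pairs into a dict.
    A bare value gets the default key 'key'; single or double quotes
    bracketing the value are stripped."""
    result = {}
    for item in regex_string.split(','):
        if '=' in item:
            key, value = item.split('=', 1)
        else:
            key, value = 'key', item
        result[key.strip()] = value.strip().strip('\'"')
    return result
```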


sprint_setup $SMEI/com/linux/sprint_setup
[Previous] [Next]
 NAME:
	sprint_setup
 PURPOSE:
	Add escape sequences used by $EXE/sprint to ~/LOGFIL.TXT
 CALLING SEQUENCE:
	sprint_setup
 CALLED BY:
	update_logfile
 PROCEDURE:
	This script is called by update_logfile.
	It uses the program mkenv to write a number of entries to
	~/LOGFIL.TXT. The executable must be in the PATH.

	The escape sequences control the HP Laserjet III printer only.
	It is assumed that the printer is hooked up to CASS185.
	Other machines should first set up a remote printer with the
	name 'print185'.

	Once sprint_setup has been called, the program $EXE/sprint should
	be available to print to the Laserjet III printer.
 MODIFICATION HISTORY:
	JAN-2001, Paul Hick (UCSD/CASS; pphick@ucsd.edu)


ssh_to $SMEI/ucsd/gen/python/ssh_to.py
[Previous] [Next]
 NAME:
	ssh_to
 PURPOSE:
	Simplify login between accounts on subnet
 CATEGORY:
	ucsd/gen/python
 CALLING SEQUENCE:
	ssh_to <-init> <user@host> <user2@host2> <...>
 INPUTS:
	user@host	account for which ssh access is needed
			(if the user name on the remote host is
			the same as on the account from which
			ssh_to is executed, then 'user@' can be
			omitted).
 OPTIONAL INPUT PARAMETERS:
	-init		if specified it must be the first command
			line argument. Initializes ssh access
			to the remote account (see PROCEDURE).
 OUTPUTS:
	(private/public key pair; updates of local 'identification'
	file and remote 'authorization' file(s))
 OPTIONAL OUTPUT PARAMETERS:
 CALLS:
	tiny.args, tiny.is_there, tiny.keys, tiny.run_cmd, tiny.start
 RESTRICTIONS:
 >	Assumes that ssh2 creates 2048 bit dsa keys
	with key name id_dsa_2048_a for the private key
	and id_dsa_2048_a.pub for the public key.
 >	Assumes that ~/.ssh2 already exists on the remote
	machines.
 >	For the soft links to work ~/bin must be in the path.
 EXAMPLE:
	ssh_to -init user@host		(initializes user@host)
	user@host			(logs into user@host)
 PROCEDURE:
	If -init is set:
	1. Check for private key of type id_dsa_2048_a_*.
	2. If the private key does not exist, then run
	   ssh-keygen2 to generate private key id_dsa_2048_a
	   and public key id_dsa_2048_a.pub. Rename the key
	   pair to id_dsa_2048_a_$RANDOM(.pub).
	3. Put private key in .ssh2/identification
	4. Loop over all hosts, and for each
	   a. put the public key in remote ~/.ssh2 directory,
	   b. put the name of the public key in remote ~/.ssh2/authorization.
	   c. put a link to ssh2_to in ~/bin 
 MODIFICATION HISTORY:
	JUL-2004, Paul Hick (UCSD/CASS)
	JAN-2004, Paul Hick (UCSD/CASS)
		Converted from bash to Python.
	FEB-2009, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Improved lookup of files with private keys.


sync_ace $SMEI/ucsd/gen/python/sync_ace.py
[Previous] [Next]
 NAME:
	sync_ace
 PURPOSE:
 	Downloads real-time ACE data from ftp.sec.noaa.gov
	for the SWEPAM and MAG instruments
 CALLING SEQUENCE:
	usually as cron job:
	bash --login -c "sync_ace.py [-mirror -force year=<year> dat=<dat>]"
 OPTIONAL INPUT PARAMETERS:
	year		four-digit year; default: current year
					yearly ACE file to be updated
					(currently disabled)
	dat=<dat>	if specified this directory is used instead of $DAT.
					(the directory is created if it doesn't exist yet).
	-mirror		if set then the local ACE data base is updated from
					the SEC data base.
	-force		forces update of yearly file from the currently
					available ACE data.
 RESTRICTIONS:
	The directories $DAT/insitu and $DAT/insitu/ace must exist. The
	Perl script mirror is in SolarSoft ($SSW/gen/mirror/mirror)
 CALLS:
	mirror, tiny.is_there, tiny.start
 PROCEDURE:
	The ACE data are stored at NOAA in monthly files with filenames
	200002_swepam_1hr.txt, where the numbers indicate year and month.
	These monthly files are updated regularly with real-time ACE data.

	Monthly files are stored locally in $DAT/insitu/ace2. This directory
	maintains copies of all SWEPAM and MAG files from the NOAA site.
	This is done using the Perl script 'mirror'.

	If no year is specified on the command line then the current yearly
	file is updated only if new data have been downloaded from NOAA.
	If a year is specified then the file for that year is regenerated.
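	The NOAA monthly file naming convention described above is simple to
	reproduce (hypothetical helper, shown for illustration):

```python
def monthly_name(year, month, instrument):
    """NOAA monthly file name, e.g. year 2000, month 2, 'swepam'
    gives '200002_swepam_1hr.txt'."""
    return '%04d%02d_%s_1hr.txt' % (year, month, instrument)
```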
 MODIFICATION HISTORY:
	???-1999, James Chao, Paul Hick
		Original bash version
	JAN-2000, Paul Hick (UCSD/CASS)
		Added a couple of error checks
	AUG-2002, Paul Hick (UCSD/CASS)
		Removed explicit calls to bash startup scripts
		/etc/profile and ~/.bash_profile.
	OCT-2002, Paul Hick (UCSD/CASS)
		Rewrite of download section. Now uses Perl script 'mirror'
	DEC-2003, Paul Hick (UCSD/CASS)
		Changed solar.sec.noaa.edu to ftp.sec.noaa.edu
	DEC-2003, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Converted from bash to Python
	DEC-2018, Paul Hick (UCSD/CASS)
		Changed ftp.sec.noaa.edu to ftp.swpc.noaa.gov


sync_ips_daily [1] $SMEI/com/linux/sync_ips_daily
[Previous] [Next]
 NAME:
	sync_ips_daily
 PURPOSE:
	Controls the processing of new IPS data arriving from Nagoya.
 CALLING SEQUENCE:
	as cron job:
	bash --login -c "sync_ips_daily"

	interactive:
	sync_ips_daily [--with-correlations]
		- gets new data from Nagoya site
		- if no new data available
			- only update all images
			  on the website (correlation graphs will only
			  be updated if --with-correlations is set)
		- if new data are available
			- run corotating tomography
			- run time-dependent tomography
			- update images on website (incl. correlations)
			- update movies on website

	Several of the components can be run individually:

	sync_ips_daily --download-only
		checks Nagoya site for new IPS data and downloads
		if new IPS data are available

	sync_ips_daily --tomography
		runs the tomography (both corotating and time-dependent)
		and updates all images (incl. correlations) and
		movies on the IPS website

	sync_ips_daily --image-update [--with-correlations]
		updates all images on the website. The correlation graphs
		are updated only if --with-correlations is set too.

	sync_ips_daily --movie-update
		updates all movies on the website.
		
 INPUTS:
 OPTIONAL INPUT PARAMETERS:
	--tomography	if set the script jumps straight to running
			the tomography programs, bypassing the download
			of new daily IPS files.
	--download-only	if set the script just runs the download script
			and does not run the tomography programs

	--image-update	only updates images on IPS website
			(incl. correlation graphs if --with-correlations
			is set).
	--with-correlations
			updates correlation graphs, if set in conjunction
			with --image-update

	--movie-update	only updates movies on IPS website

	--dry-run	a 'dry-run' currently will run the tomography
			in $TUB, and suppress copying of images and html
			files to the IPS website.

	We have available both NSO/NOAA and WSO/NOAA magnetic field data.
	These are added to the processing by defining the env vars BB_PREFIXES,
	BB_PREFIXES_SLOW and BB_PREFIXES_FAST.
	To use the same magnetic data for both corotating and time-dependent
	tomography, use BB_PREFIXES:
		BB_PREFIXES='wson'		only WSO/NOAA data
		BB_PREFIXES='nson'		only NSO/NOAA data
		BB_PREFIXES='wson nson'		both WSO/NOAA and NSO/NOAA
	To use different magnetic fields for corotating and time-dependent
	tomography set the pair BB_PREFIXES_SLOW and BB_PREFIXES_FAST to the
	desired magnetic data.
 CALLS:
	ipsd, ipsdt, run_marker, run_mean, sync_ips_mirror [1], sync_ips_mirror [2]
	vox_update
 PROCEDURE:
	Finally one or more tomography programs are called to update the
	latest tomographic reconstructions.

	People on the honcho list receive several email notifications
	when a part of the script completes.
	Multiple email addresses need to be separated by a space.

	The IDL display software differentiates between corotating
	and time-dependent tomography output files by checking the 
	'marker' value. This value is coded in the filename, and is present
	in the header under the key 'Rotation counter'.

	For a corotating file the marker value in the header MUST be zero;
	and the marker is NOT encoded in the filename.

	For a time-dependent file the marker value identifies all files
	associated with the same tomography run. It has a value of 1 or higher.
	The marker is coded in the filename as a 5-digit integer.
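	The marker convention can be made concrete with a small sketch
	(hypothetical helper; the corotating/time-dependent rule is exactly
	the one stated above):

```python
def marker_tag(marker):
    """Encode the 'Rotation counter' marker for a file name:
    corotating runs (marker 0) get no tag in the name;
    time-dependent runs (marker >= 1) get a 5-digit integer."""
    return '' if marker == 0 else '%05d' % marker
```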
 MODIFICATION HISTORY:
	JAN-2001, Paul Hick (UCSD/CASS)
	SEP-2001, Paul Hick (UCSD/CASS)
		Added honcho email list
	AUG-2002, Paul Hick (UCSD/CASS)
		Removed explicit calls to bash startup scripts
		/etc/profile and ~/.bash_profile.
	OCT-2002, Paul Hick (UCSD/CASS)
		Rewrite of download section. Now uses Perl script 'mirror'
		Also added the -tomography keyword.
	OCT-2002, Paul Hick (UCSD/CASS)
		The script does not abort anymore when the corotating tomography
		fails, but continues with the time-dependent program.
		All image updates are now done after both tomography programs
		have been run. The intermediate update of the corotating images
		prior to running the time-dependent tomography has been dropped.
	JUN-2003, Paul Hick (UCSD/CASS)
		Added calls to gzip to compress all nv3d*, etc., both raw
		and final versions.
	JUL-2003, Paul Hick (UCSD/CASS)
		Removed call to 'idl run_map'
	NOV-2003, Paul Hick (UCSD/CASS)
		Removed compression of final files in this script using a single
		gzip -f after the call to run_marker or run_mean. Instead this is now
		done by the IDL procedure vu_write on a file by file basis.
		The gzip -f call caused problems for the hourly forecast run while
		run_marker was creating new final files. For some time the old
		(gzipped) final files would coexist with the new (unzipped) final
		files (until gzip -f was completed). An hourly forecast run during
		this time would find multiple files referring to the same time.
		The results are blank maps at the hourly forecast time.
	AUG-2004, Paul Hick (UCSD/CASS)
		Modified selection of magnetic field data by env variables.
	APR-2011, John Clover (UCSD/CASS)
		Modified to use ipstd20n_intel and output/process nv3h files.
	APR-2013, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Substantial cleanup. More documentation.
		Switched to long options.


sync_ips_daily [2] $SMEI/com/linux/sync_ips_daily.save
[Previous] [Next]
 NAME:
	sync_ips_daily
 PURPOSE:
	Controls the processing of new IPS data arriving from Nagoya.
 CALLING SEQUENCE:
	as cron job:
	bash --login -c "sync_ips_daily"

	interactive:
	sync_ips_daily [--with-correlations]
		- gets new data from Nagoya site
		- if no new data available
			- only update all images
			  on the website (correlation graphs will only
			  be updated if --with-correlations is set)
		- if new data are available
			- run corotating tomography
			- run time-dependent tomography
			- update images on website (incl. correlations)
			- update movies on website

	Several of the components can be run individually:

	sync_ips_daily --download-only
		checks Nagoya site for new IPS data and downloads
		if new IPS data are available

	sync_ips_daily --tomography
		runs the tomography (both corotating and time-dependent)
		and updates all images (incl. correlations) and
		movies on the IPS website

	sync_ips_daily --image-update [--with-correlations]
		updates all images on the website. The correlation graphs
		are updated only if --with-correlations is set too.

	sync_ips_daily --movie-update
		updates all movies on the website.
		
 INPUTS:
 OPTIONAL INPUT PARAMETERS:
	--tomography	if set the script jumps straight to running
			the tomography programs, bypassing the download
			of new daily IPS files.
	--download-only	if set the script just runs the download script
			and does not run the tomography programs

	--image-update	only updates images on IPS website
			(incl. correlation graphs if --with-correlations
			is set).
	--with-correlations
			updates correlation graphs, if set in conjunction
			with --image-update

	--movie-update	only updates movies on IPS website

	--dry-run	a 'dry-run' currently will run the tomography
			in $TUB, and suppress copying of images and html
			files to the IPS website.

	We have available both NSO/NOAA and WSO/NOAA magnetic field data.
	These are added to the processing by defining the env vars BB_PREFIXES,
	BB_PREFIXES_SLOW and BB_PREFIXES_FAST.
	To use the same magnetic data for both corotating and time-dependent
	tomography, use BB_PREFIXES:
		BB_PREFIXES='wson'		only WSO/NOAA data
		BB_PREFIXES='nson'		only NSO/NOAA data
		BB_PREFIXES='wson nson'		both WSO/NOAA and NSO/NOAA
	To use different magnetic fields for corotating and time-dependent
	tomography set the pair BB_PREFIXES_SLOW and BB_PREFIXES_FAST to the
	desired magnetic data.
 CALLS:
	ipsd, ipsdt, run_marker, run_mean, sync_ips_mirror [1], sync_ips_mirror [2]
	vox_update
 PROCEDURE:
	Finally one or more tomography programs are called to update the
	latest tomographic reconstructions.

	People on the honcho list receive several email notifications
	when a part of the script completes.
	Multiple email addresses need to be separated by a space.

	The IDL display software differentiates between corotating
	and time-dependent tomography output files by checking the 
	'marker' value. This value is coded in the filename, and is present
	in the header under the key 'Rotation counter'.

	For a corotating file the marker value in the header MUST be zero;
	and the marker is NOT encoded in the filename.

	For a time-dependent file the marker value identifies all files
	associated with the same tomography run. It has a value of 1 or higher.
	The marker is coded in the filename as a 5-digit integer.
 MODIFICATION HISTORY:
	JAN-2001, Paul Hick (UCSD/CASS)
	SEP-2001, Paul Hick (UCSD/CASS)
		Added honcho email list
	AUG-2002, Paul Hick (UCSD/CASS)
		Removed explicit calls to bash startup scripts
		/etc/profile and ~/.bash_profile.
	OCT-2002, Paul Hick (UCSD/CASS)
		Rewrite of download section. Now uses Perl script 'mirror'
		Also added the -tomography keyword.
	OCT-2002, Paul Hick (UCSD/CASS)
		The script does not abort anymore when the corotating tomography
		fails, but continues with the time-dependent program.
		All image updates are now done after both tomography programs
		have been run. The intermediate update of the corotating images
		prior to running the time-dependent tomography has been dropped.
	JUN-2003, Paul Hick (UCSD/CASS)
		Added calls to gzip to compress all nv3d*, etc., both raw
		and final versions.
	JUL-2003, Paul Hick (UCSD/CASS)
		Removed call to 'idl run_map'
	NOV-2003, Paul Hick (UCSD/CASS)
		Removed compression of final files in this script using a single
		gzip -f after the call to run_marker or run_mean. Instead this is now
		done by the IDL procedure vu_write on a file by file basis.
		The gzip -f call caused problems for the hourly forecast run while
		run_marker was creating new final files. For some time the old
		(gzipped) final files would coexist with the new (unzipped) final
		files (until gzip -f was completed). An hourly forecast run during
		this time would find multiple files referring to the same time.
		The results are blank maps at the hourly forecast time.
	AUG-2004, Paul Hick (UCSD/CASS)
		Modified selection of magnetic field data by env variables.
	APR-2011, John Clover (UCSD/CASS)
		Modified to use ipstd20n_intel and output/process nv3h files.
	APR-2013, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Substantial cleanup. More documentation.
		Switched to long options.


sync_ips_daily [3] $SMEI/com/linux/sync_ips_daily_v17
[Previous] [Next]
 NAME:
	sync_ips_daily
 PURPOSE:
	Controls the processing of new IPS data arriving from Nagoya.
 CALLING SEQUENCE:
	as cron job:
	bash --login -c "sync_ips_daily"

	interactive:
	sync_ips_daily [--with-correlations]
		- gets new data from Nagoya site
		- if no new data available
			- only update all images
			  on the website (correlation graphs will only
			  be updated if --with-correlations is set)
		- if new data are available
			- run corotating tomography
			- run time-dependent tomography
			- update images on website (incl. correlations)
			- update movies on website

	Several of the components can be run individually:

	sync_ips_daily --download-only
		checks Nagoya site for new IPS data and downloads
		if new IPS data are available

	sync_ips_daily --tomography
		runs the tomography (both corotating and time-dependent)
		and updates all images (incl. correlations) and
		movies on the IPS website

	sync_ips_daily --image-update [--with-correlations]
		updates all images on the website. The correlation graphs
		are updated only if --with-correlations is set too.

	sync_ips_daily --movie-update
		updates all movies on the website.
		
 INPUTS:
 OPTIONAL INPUT PARAMETERS:
	--tomography	if set the script jumps straight to running
			the tomography programs, bypassing the download
			of new daily IPS files.
	--download-only	if set the script just runs the download script
			and does not run the tomography programs

	--image-update	only updates images on IPS website
			(incl. correlation graphs if --with-correlations
			is set).
	--with-correlations
			updates correlation graphs, if set in conjunction
			with --image-update

	--movie-update	only updates movies on IPS website

	--dry-run	a 'dry-run' currently will run the tomography
			in $TUB, and suppress copying of images and html
			files to the IPS website.

	We have available both NSO/NOAA and WSO/NOAA magnetic field data.
	These are added to the processing by defining the env vars BB_PREFIXES,
	BB_PREFIXES_SLOW and BB_PREFIXES_FAST.
	To use the same magnetic data for both corotating and time-dependent
	tomography, use BB_PREFIXES:
		BB_PREFIXES='wson'		only WSO/NOAA data
		BB_PREFIXES='nson'		only NSO/NOAA data
		BB_PREFIXES='wson nson'		both WSO/NOAA and NSO/NOAA
	To use different magnetic fields for corotating and time-dependent
	tomography set the pair BB_PREFIXES_SLOW and BB_PREFIXES_FAST to the
	desired magnetic data.
 CALLS:
	ipsd, ipsdt, run_marker, run_mean, sync_ips_mirror [1], sync_ips_mirror [2]
	vox_update
 PROCEDURE:
	Finally one or more tomography programs are called to update the
	latest tomographic reconstructions.

	People on the honcho list receive several email notifications
	when a part of the script completes.
	Multiple email addresses need to be separated by a space.

	The IDL display software differentiates between corotating
	and time-dependent tomography output files by checking the 
	'marker' value. This value is coded in the filename, and is present
	in the header under the key 'Rotation counter'.

	For a corotating file the marker value in the header MUST be zero;
	and the marker is NOT encoded in the filename.

	For a time-dependent file the marker value identifies all files
	associated with the same tomography run. It has a value of 1 or higher.
	The marker is coded in the filename as a 5-digit integer.
 MODIFICATION HISTORY:
	JAN-2001, Paul Hick (UCSD/CASS)
	SEP-2001, Paul Hick (UCSD/CASS)
		Added honcho email list
	AUG-2002, Paul Hick (UCSD/CASS)
		Removed explicit calls to bash startup scripts
		/etc/profile and ~/.bash_profile.
	OCT-2002, Paul Hick (UCSD/CASS)
		Rewrite of download section. Now uses Perl script 'mirror'
		Also added the -tomography keyword.
	OCT-2002, Paul Hick (UCSD/CASS)
		The script does not abort anymore when the corotating tomography
		fails, but continues with the time-dependent program.
		All image updates are now done after both tomography programs
		have been run. The intermediate update of the corotating images
		prior to running the time-dependent tomography has been dropped.
	JUN-2003, Paul Hick (UCSD/CASS)
		Added calls to gzip to compress all nv3d*, etc., both raw
		and final versions.
	JUL-2003, Paul Hick (UCSD/CASS)
		Removed call to 'idl run_map'
	NOV-2003, Paul Hick (UCSD/CASS)
		Removed compression of final files in this script using a single
		gzip -f after the call to run_marker or run_mean. Instead this is now
		done by the IDL procedure vu_write on a file by file basis.
		The gzip -f call caused problems for the hourly forecast run while
		run_marker was creating new final files. For some time the old
		(gzipped) final files would coexist with the new (unzipped) final
		files (until gzip -f was completed). An hourly forecast run during
		this time would find multiple files referring to the same time.
		The results are blank maps at the hourly forecast time.
	AUG-2004, Paul Hick (UCSD/CASS)
		Modified selection of magnetic field data by env variables.
	APR-2011, John Clover (UCSD/CASS)
		Modified to use ipstd20n_intel and output/process nv3h files.
	APR-2013, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Substantial cleanup. More documentation.
		Switched to long options.
	SEP-2017, Hsiu-Shan Yu (UCSD/CASS)
		Updated to v17_intel_static; magnetic field 3 components (mag3).


sync_ips_email $SMEI/ucsd/gen/python/sync_ips_email.py
[Previous] [Next]
 NAME:
	sync_ips_email
 PURPOSE:
	To determine if there is new IPS data by checking for new email
 OUTPUT:
	Returns a 1 if an IPS email was found, 0 if not
 CALLED BY:
	sync_ips_mirror [1], sync_ips_mirror [2]
 RESTRICTIONS:
	To be able to replace /var/spool/mail/username, the account running
	this script needs to have write access to the file. The mailbox is already
	owned by 'username' but is in group 'mail'. The easiest way to provide
	write access seems to be to make 'username' a member of group 'mail', in
	addition to the group 'users'.
 CALLS:
	Time2Time, tiny.run_cmd
 PROCEDURE:
	Incoming email is put in /var/spool/mail/username
	Pine moves email from /var/spool/mail/username to $HOME/mbox
	We check both mailboxes for IPS email from Nagoya.

	Both mailboxes are scanned for email with "stelab.nagoya-u.ac.jp" in the
	message ID and "VLIST_UCSD_" in the body. All such emails are deleted,
	and a status of 1 is returned.

	Because the PortableUnixMailbox library does not seem to have a function
	for deleting messages, we have to explicitly rewrite the mailbox to
	remove the IPS emails once they have been detected.

	If no IPS email is found in a mailbox then the mailbox remains unmodified.
	If one or more IPS emails are found, then all non-IPS emails are accumulated
	in a temporary mailbox file. The temporary file (which does not contain
	the IPS emails) replaces the old mailbox file after completion.

	The only kludge used is the addition of a phony From_ line to the start of
	each email written to the temporary mailbox file. There does not seem to be
	a way to extract this line from the mailbox file itself, so we had to fake
	one (if it is not there, pine will not recognize it as a valid mailbox).
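	A simplified sketch of the rewrite step (the real script uses the
	PortableUnixMailbox library and a temporary mailbox file; this version
	just splits on From_ lines and does not handle '>From ' escaping in
	message bodies):

```python
def filter_mailbox(text):
    """Split mbox-format text on From_ lines and drop IPS notifications
    (messages containing both 'stelab.nagoya-u.ac.jp' and 'VLIST_UCSD_').
    Returns (new_text, number_removed); the text is returned unchanged
    when no IPS email is found."""
    messages, current = [], []
    for line in text.splitlines(True):
        if line.startswith('From ') and current:
            messages.append(''.join(current))   # flush previous message
            current = []
        current.append(line)
    if current:
        messages.append(''.join(current))
    kept = [m for m in messages
            if 'stelab.nagoya-u.ac.jp' not in m or 'VLIST_UCSD_' not in m]
    if len(kept) == len(messages):
        return text, 0                          # leave mailbox unmodified
    return ''.join(kept), len(messages) - len(kept)
```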
 MODIFICATION HISTORY:
	SEP-2003, Austin Duncan
	OCT-2003, Paul Hick (UCSD/CASS)
		Added check to make sure the mailbox file exists.
		Added processing of $HOME/mbox in addition to /var/spool/mail/username
	NOV-2003, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Added the email time converted from JST to UT to output log.


sync_ips_mirror [1] $SMEI/com/linux/sync_ips_mirror
[Previous] [Next]
 NAME:
	sync_ips_mirror
 PURPOSE:
	Checks for new IPS data from Nagoya, downloads it, and merges
	it into the yearly data files in $NAGOYA/daily
 CALLING SEQUENCE:
	sync_ips_mirror [$1 $2]
 OPTIONAL INPUTS:
	$1	list of email addresses
	$2	if equal to '-mirror' then the data from Nagoya are
		mirrored even if there is no email alert.
 OUTPUTS:
	$?	return status
		0: run tomography
		1: no need to run tomography, because
			- no new email arrived, or
			- no new daily files downloaded, or
			- email confirmation failed, or
			- merging of daily file with yearly files failed
	File $TUB/daily_ips.txt stores output from the sorting program 'dailyips'.
 CALLS:
	daily_ips, mirror, run_glevel, sync_ips_email
 CALLED BY:
	sync_ips_daily [1], sync_ips_daily [2], sync_ips_daily [3]
 RESTRICTIONS:
	Needs the Perl script 'mirror'. Currently the version included
	with SolarSoft is used.
 SIDE EFFECTS:
	The contents of all new daily IPS files are emailed to everyone
	on the $1 list of email addresses.
 PROCEDURE:
	First the script sync_ips_email is called to check whether a new email
	has arrived from Nagoya indicating the availability of new IPS data.
	If no new email has arrived, return status=1.

	If a new email has arrived, then the new IPS data are downloaded from
	stesun5.stelab.nagoya-u.ac.jp, and are stored in $NAGOYA/ipsrt.
	If no new data files are downloaded, return status=1.

	Then the program $EXE/dailyips is called to add the new data to the
	yearly data files (stored in $NAGOYA/daily).
	If this integration fails, return status=1

	Then the IDL program run_glevel.pro is run to calculate
	g-level values. These are added to the yearly files.
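	The return-status convention above can be summarized in a small
	sketch (purely illustrative; the argument names are invented):

```python
def mirror_status(email_arrived, files_downloaded, merge_ok):
    """Status convention from the description above:
    0 means run the tomography, 1 means nothing to do or a failure."""
    if not email_arrived:
        return 1   # no alert email from Nagoya
    if not files_downloaded:
        return 1   # mirror ran but no new daily files arrived
    if not merge_ok:
        return 1   # dailyips failed to merge into the yearly files
    return 0       # new data merged; run the tomography
```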
 MODIFICATION HISTORY:
	OCT-2002, Paul Hick (UCSD/CASS)
		Split off from sync_daily_ips
	OCT-2003, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Replaced bash script sync_ips_email by Python script
		with same name
	APR-2011, John Clover (UCSD/CASS)
		Updated STELab FTP address


sync_ips_mirror [2] $SMEI/com/linux/sync_ips_mirror-only
[Previous] [Next]
 NAME:
	sync_ips_mirror
 PURPOSE:
	Checks for new IPS data from Nagoya, downloads it, and merges
	it into the yearly data files in $NAGOYA/daily
 CALLING SEQUENCE:
	sync_ips_mirror [$1 $2]
 OPTIONAL INPUTS:
	$1		list of email addresses
	$2		if equal to '-mirror' then the data from Nagoya are
			mirrored even if there is no email alert.
 OUTPUTS:
	$?		return status
			0: run tomography
			1: no need to run tomography, because
				no new email arrived, or
				no new daily files downloaded, or
				email confirmation failed, or
				merging of daily file with yearly files failed
	File $TUB/daily_ips.txt stores output from the sorting program 'dailyips'.
 CALLS:
	daily_ips, mirror, run_glevel, sync_ips_email
 CALLED BY:
	sync_ips_daily [1], sync_ips_daily [2], sync_ips_daily [3]
 RESTRICTIONS:
	Needs the Perl script 'mirror'. Currently the version included
	with SolarSoft is used.
 SIDE EFFECTS:
	The contents of all new daily IPS files are emailed to everyone
	on the $1 list of email addresses.
 PROCEDURE:
	First the script sync_ips_email is called to check whether a new email
	has arrived from Nagoya indicating the availability of new IPS data.
	If no new email has arrived, return status=1.

	If a new email has arrived, then the new IPS data are downloaded from
	stesun5.stelab.nagoya-u.ac.jp, and are stored in $NAGOYA/ipsrt.
	If no new data files are downloaded, return status=1.

	Then the program $EXE/dailyips is called to add the new data to the
	yearly data files (stored in $NAGOYA/daily).
	If this integration fails, return status=1

	Then the IDL program run_glevel.pro is run to calculate
	g-level values. These are added to the yearly files.
 MODIFICATION HISTORY:
	OCT-2002, Paul Hick (UCSD/CASS)
		Split off from sync_daily_ips
	OCT-2003, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
		Replaced bash script sync_ips_email by Python script
		with same name
	APR-2011, John Clover (UCSD/CASS)
		Updated STELab FTP address


sync_ips_realtime [1] $SMEI/com/linux/sync_ips_realtime
[Previous] [Next]
 NAME: 
	sync_ips_realtime
 PURPOSE:
	Download latest realtime gvalues from STELab.
	The old link,
		ftp://ftp.stelab.nagoya-u.ac.jp/pub/vlist/rt/gvalue_rt1.dat,
	is currently not working; the new link is
		ftp://ftp.isee.nagoya-u.ac.jp/pub/vlist/rt/gvalue_rt1.dat
	Compare to the daily file in $NAGOYA/daily and wget the file
	if it is different.
 CALLING SEQUENCE:
	sync_ips_realtime <email-addresses>
 INPUTS:
	email-addresses		list of email addresses that will receive
						a listing of new data
 OUTPUTS:
	exit code:
	0	new data; run tomography
	2	no new data; no need to run tomography
	1	error downloading from Nagoya or other processing error
 PROCEDURE:
	New data points are collected in $NAGOYA/daily/nagoya.<year>

 Nagoya format:
 SOURCE   YRMNDY    UT DIST HLA  HLO GLA  GLO CARR    V  ER G-VALUE   RA(B1950) DC(B1950) RA(J2000) DC(J2000)
 1245-19  151231 21.00 0.98  -2   10  -6  285 2172 -999-999 1.17374    12 45 44 -19 42 58  12 48 23 -19 59 19

 Morelia format:
 Date     MidObsUT Dur. Site Freq BW Source   Size  RA-J2000 Dec-J2000 Limb  Dist.   Lat.   PA   Elong   Vel.  V-err g-value g-err Method     Vel.  V-err g-value g-err Method
 20151231 20:59:52  2.7 STEL  327 10 1245-19  -999  12 48 23 -19 59 19   W   208.15  -6.3  249.2  80.9   -999   -999   -999   -999 3-St. CC   -999   -999  1.174  0.212 1-St. PS
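 As an illustration of the Nagoya YRMNDY field above, a small sketch
 converting it to an ISO date (the two-digit-year pivot at 70 is an
 assumption made here, not taken from the scripts):

```python
def yrmndy_to_iso(yrmndy):
    """Convert Nagoya's YRMNDY field (e.g. '151231') to an ISO date.
    Two-digit years >= 70 are assumed to be 19xx, else 20xx."""
    yy, mm, dd = yrmndy[0:2], yrmndy[2:4], yrmndy[4:6]
    century = '19' if int(yy) >= 70 else '20'   # pivot is an assumption
    return '%s%s-%s-%s' % (century, yy, mm, dd)
```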

 MODIFICATION HISTORY:
	APR-2013, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
	    Fixed bug: source list not calculated for new yearly file


sync_ips_realtime [2] $SMEI/com/linux/sync_ips_realtime.bak
[Previous] [Next]
 NAME: 
	sync_ips_realtime
 PURPOSE:
	Download latest realtime gvalues from STELab.
	The old link,
		ftp://ftp.stelab.nagoya-u.ac.jp/pub/vlist/rt/gvalue_rt1.dat,
	is currently not working; the new link is
		ftp://ftp.isee.nagoya-u.ac.jp/pub/vlist/rt/gvalue_rt1.dat
	Compare to the daily file in $NAGOYA/daily and wget the file
	if it is different.
 CALLING SEQUENCE:
	sync_ips_realtime <email-addresses>
 INPUTS:
	email-addresses		list of email addresses that will receive
						a listing of new data
 OUTPUTS:
	exit code:
	0	new data; run tomography
	2	no new data; no need to run tomography
	1	error downloading from Nagoya or other processing error
 PROCEDURE:
	New data points are collected in $NAGOYA/daily/nagoya.<year>

 Nagoya format:
 SOURCE   YRMNDY    UT DIST HLA  HLO GLA  GLO CARR    V  ER G-VALUE   RA(B1950) DC(B1950) RA(J2000) DC(J2000)
 1245-19  151231 21.00 0.98  -2   10  -6  285 2172 -999-999 1.17374    12 45 44 -19 42 58  12 48 23 -19 59 19

 Morelia format:
 Date     MidObsUT Dur. Site Freq BW Source   Size  RA-J2000 Dec-J2000 Limb  Dist.   Lat.   PA   Elong   Vel.  V-err g-value g-err Method     Vel.  V-err g-value g-err Method
 20151231 20:59:52  2.7 STEL  327 10 1245-19  -999  12 48 23 -19 59 19   W   208.15  -6.3  249.2  80.9   -999   -999   -999   -999 3-St. CC   -999   -999  1.174  0.212 1-St. PS

 MODIFICATION HISTORY:
	APR-2013, Paul Hick (UCSD/CASS; pphick@ucsd.edu)
	    Fixed bug: source list not calculated for new yearly file