ipsd2020 nagoya=nagoya,/home/bjackson/dat/nagoya/yearly -->
./ipstd_20n_inp_intel nagoya=nagoya,,yearly
./ipstd_20n_inp_intel nagoya=nagoya,,yearly, ace=$DAT/insitu/acesw_????.hravg
./ipstd_20n_inp_mag_mod_intel nagoya=nagoya,,yearly, nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss

As of 11/8/2012 the following work:
./ipstd_20n_inp_mag_intel nagoya=nagoya,,yearly, ace=$DAT/insitu/acesw_[4].hravg nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss
./ipstd_20n_inp_mag_intel nagoya=nagoya,,yearly, swace=$DAT/insitu/swace_[4].hravg nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss
./ipstd_20n_inp_mag_intel nagoya=nagoya,,yearly, wind=$DAT/insitu/wind_swe/????_WIND_hourly_averages nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss
./ipstd_20n_inp_mag_intel nagoya=nagoya,,yearly, celias=$DAT/insitu/celias_[4].hravg nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss
./ipstd_20n_inp_mag_intel nagoya=nagoya,,daily, celias=$DAT/insitu/realcelias/celias_realtime.hravg* nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss
   ! (for real-time celias data analysis; works for forecasts)
./ipstd_20n_inp_mag_g_intel gen=nagoya[4],~/ wind=$DAT/insitu/wind_swe/????_WIND_hourly_averages nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss
   (works if nagoya.2013 or nagoya.2014 is in /home/bjackson/)
./ipstd_20n_inp_mag_g_intel gen=nagoya,,daily ace=$DAT/insitu/acesw_[4].hravg nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss
   (works for forecasts)

Revisions:

On the week of 11/13/00 I began modifying both mkvmaptdN.for and MKDMAPTDN.for using the mk_ipsd2020NN. I modified the IPSDTDV2020NN.f program to use the above programs with scratch arrays placed in the calling sequence. When I did this I noticed that the scratch arrays originally used were not dimensioned properly for the 2020 program. The scratch arrays zeroed at the beginning of the program were not fully zeroed, and this was an error.
When this was done properly, the 2020 program no longer converged with few holes - there were a lot of holes in both velocity and density, and the results that I had come to like and agree with were no longer correct. I presume that the increased number of line-of-sight crossings from the non-zeroed arrays was partially responsible for the previous OK results (which always seemed to be consistently the same for any given Carrington map). I consequently got the idea to filter the weights and the number of line-of-sight crossings in both time and space so that these are more consistent for only a few lines of sight, in a way similar to the answers. Thus the weights and the numbers of line crossings are now smoothed spatially and temporally with the same filters as the answers. This seems to work wonderfully, and allows the line-of-sight crossing threshold to be placed lower than before - to as low as 1.5. At 1.5 most of the Carrington map is filled and the program proceeds to convergence (on Carrington map 1965) pretty well. As of 11/16/00, the comparisons with in-situ data work almost as well as before, but time will tell.

In an additional change on 11/15/00, I modified mkvmaptdN.for and MKDMAPTDN.for to increase the effective number of line-of-sight crossings by sqrt(1/cos(latitude)). This allows the acceptance of a lower number of line-of-sight crossings near the poles, since the spatial filtering is greater there. This also seems to work. On 11/15/00 I also modified fillmaptN.for and copyvtovdN.for to accept scratch arrays through the calling sequence. IPSDTDV2020NN.f also needs to be modified to accept these subroutines with their new calling sequences. The convergence with the above modifications (11/16/00) seemed pretty ragged until sources were thrown out, after which the convergence proceeded in a very stable fashion. The peak in density at 7/15 was absent in the density data, and this may be because of the thrown sources.
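The latitude boost to the effective crossing count can be sketched as follows. This is my own illustrative framing, not code from the program: only the sqrt(1/cos(latitude)) factor and the 1.5 acceptance threshold come from the notes; the function name and array handling are assumptions.

```python
import numpy as np

def effective_crossings(ncross, lat_deg, min_cross=1.5):
    """Scale line-of-sight crossing counts by sqrt(1/cos(latitude)) so
    that a lower raw count is accepted near the poles, where the spatial
    filtering is stronger; then flag map bins passing the threshold.
    (Illustrative sketch only.)"""
    lat = np.radians(np.asarray(lat_deg, dtype=float))
    boost = np.sqrt(1.0 / np.cos(lat))        # sqrt(1/cos(latitude))
    eff = np.asarray(ncross, dtype=float) * boost
    return eff, eff >= min_cross
```

For example, a raw count of 1.2 fails the 1.5 cut on the equator but passes at 60 degrees latitude, where the boost is sqrt(2).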
On 11/16/00 I then also smoothed FIXM spatially and temporally in both mkvmaptdN.for and MKDMAPTDN.for. This also converged in a ragged fashion even after the thrown sources, and in the end did not write the 3D arrays I asked for - maybe memory was clobbered. The program did converge, however. The in-situ time series was not at all like the model!!! On 11/16/00 I then changed back to the earlier convergence scheme where FIXM is not smoothed. The 3D arrays still are not being written out. On 11/16/00 I noticed that the mkvmodeltd.for routine was not as in the vax [.ipsd2020.normn] subdirectory, but was an older version. I re-wrote the newer version (which does not need a scratch file) and replaced mkvmodeltd.for with the newer version. I also checked to see that the MKGMODELTDN.for subroutine was unchanged from the vax, and I revised it so that it passes two scratch arrays through its calling sequence. The newer version iterates identically to the old version. There seems to be no change in the write3b status throughout the iterations, indicating that nothing gets clobbered during the iterations.

On 11/21/00 I fixed the fillwholet.for subroutine so that a scratch array is now passed through its calling sequence, which fixes the error noticed above on 11/16/00. Things seem pretty much the same when I do this, and now the t3d files are output. On 11/21/00 I stopped using distributed weights, since the program NOT using distributed weights seems to converge as well as the program version that does use distributed weights. The EA answers are somewhat different, but not much.

On 12/5/00 I found an error in the write3dinfotd1N.for subroutine in forecast mode. The subroutine seems to give two 3d files that are incorrectly labeled (and interpolated) because AnewI is wrong. I believe AnewI is wrong because the input N values are wrong in forecast mode.
The problem was in the use of XCintF: there was not an N+1 copy of the XCintGG array into the XCintF one in the main program. In forecast mode this caused the bomb. However, I see that for whatever reason there was once a forecast mode in the write3dinfotd1N.for routine, it does not exist any more. I have therefore eliminated the forecast mode for write3dinfotd1N.for in both the main program and in the subroutine, so that now all that is needed is a single call to write3dinfotd1N.for.

On 12/7/00 I found an error in the MKTIMESVD.for routine in that the NmidHR value was subtracted from the time to start the beginning day. In the main program, this value was labeled number of hours "before" midnight and was -10 for Nagoya. The main program now reads number of hours "from" midnight and is now -10 as before for Nagoya. UCSD is +8 or +9, as you know, depending on daylight savings time. The MKTIMESVD.for subroutine now adds this value to begin the day at midnight. This has the effect of changing all the extensions of the t3d files, since the temporal intervals now begin at a different time - midnight at Nagoya. If the t3d files are interpolated by 1 in write3dinfo1N.for, this has the effect of dividing the day into times before midday in Nagoya and after midday in Nagoya. If the files are interpolated by 2 in write3dinfo1N.for, then the t3d files are divided (approximately) into midnight to 8am, 8am to 4pm, and 4pm to midnight. The extension values have been checked by PL HE Pandora on the vax.

On 12/7/00 I found that the forecast 51874 run, which terminates data at this time, or 12 UT on November 25 (355 sources found), gives the last matrix at 1970.064 (centered at 1970.0825, or 2 UT November 26). The forecast run at 51874.5 (0 UT November 26) (371 sources found) gives the last matrix at 1970.064 as well.
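The way interpolation subdivides the Nagoya-local day, described above, is simple fencepost arithmetic; this helper is purely illustrative and is not a routine in the program:

```python
def t3d_boundaries(n_interp):
    """Hour boundaries of the t3d intervals in one local day when the
    3D files are interpolated by n_interp: the day splits into
    n_interp + 1 equal pieces (1 -> before/after midday; 2 -> roughly
    midnight-8am, 8am-4pm, 4pm-midnight, as noted above)."""
    n = n_interp + 1
    return [24.0 * k / n for k in range(n + 1)]
```

So t3d_boundaries(1) gives [0.0, 12.0, 24.0] and t3d_boundaries(2) gives [0.0, 8.0, 16.0, 24.0].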
Since this does not give even a one-day 3d matrix advance forecast, I have now changed the values of NTV and NTG by one more so that the 3d matrix is written to a file for at least one day beyond the current time. On 12/7/00 I found that in the forecast runs there were as many sources used as there were source values within the NLG and NLV increments set by XCintG and XCintV. I fixed this in the main program so that now all the data used in forecast mode comes from times that are less than the Carrington rotation of the Earth at the time given as input for the forecast time. The current mk_ipsd2020NN uses the IPSDTDV2020NN.f main program.

On 01/30/01 I copied all the programs of the mk_ipsd2020NN compilation over to the for/IPSd2020NN subdirectory so that this program and its subroutines are now complete and separate from the other fortran programs. On 01/30/01 I also began to alter the FIXMODELTDN.for and mkvmodeltd.for subroutines so that they better reproduce in-situ velocities. I have done this by copying all the files to the for/IPSd2020 subdirectory so that this program and its subroutines are now complete and separate from the other fortran programs. I renamed the IPSDTDV2020NN.f program to IPSD2020.for. When I ran the program for 1965 in this subdirectory, the results for 16., 16., 0.25, 0.25 and 0.40 for the EA_FILE idl run_compareaceLp run were 0.771 and 0.066, which is slightly different from before (0.688 and 0.058). I don't know why there is this slight difference.

On 01/30/01 I began a lot of changes in mkvmodeltd.for in order to get better velocities into the matrix. I think the original (or perhaps the model 01/31/01B) is the correct one for weighting, but I tried a lot of things to see if there could be an improvement in the in-situ comparison. There was not a whole lot of difference when using the things I tried, below.

C       VPERP = VPERP + VWTij(J,I)*VSN*VELO          ! Original
C       VWT = VWT + VWTij(J,I)
C       VWTi = VWTi + VWTij(J,I)

C       VWTij(J,I) = VWTij(J,I)*VSN                  ! Added B. Jackson 01/30/01
C       VWT = VWT + VWTij(J,I)
C       VPERP = VPERP + VWTij(J,I)*VELO
C       VWTi = VWTi + VWTij(J,I)
C       VWTij(J,I) = VWTij(J,I)*VSN                  ! Old run long ago.

C       VPERP = VPERP + VWTij(J,I)*VSN*VELO          ! 01/31/01A
C       VWT = VWT + VWTij(J,I)*VSN
C       VWTi = VWTi + VWTij(J,I)*VSN
C       VWTij(J,I) = VWTij(J,I)*VSN

C       VPERP = VPERP + VWTij(J,I)*VSN*VELO          ! 01/31/01B Seemed ~best so far
C       VWT = VWT + VWTij(J,I)
C       VWTi = VWTi + VWTij(J,I)*VSN*VELO
C       VWTij(J,I) = VWTij(J,I)*VSN*VELO

C       VPERP = VPERP + VWTij(J,I)*VSN*VELO          ! 02/01/01A
C       VWT = VWT + VWTij(J,I)
C       VWTi = VWTi + VWTij(J,I)/VSN
C       VWTij(J,I) = VWTij(J,I)/VSN

C       VPERP = VPERP + VWTij(J,I)*VSN*VELO          ! 02/01/01B (Original)
C       VWT = VWT + VWTij(J,I)
C       VWTi = VWTi + VWTij(J,I)
C       VWTij(J,I) = VWTij(J,I)

C       VPERP = VPERP + SQRT(VWTij(J,I))*VSN*VELO    ! 02/01/01C
C       VWT = VWT + SQRT(VWTij(J,I))
C       VWTi = VWTi + SQRT(VWTij(J,I))*VSN*VELO
C       VWTij(J,I) = SQRT(VWTij(J,I))*VSN*VELO

        VPERP = VPERP + VWTij(J,I)*VSN*VELO          ! 01/31/01B Seemed ~best so far
        VWT = VWT + VWTij(J,I)
        VWTi = VWTi + VWTij(J,I)*VSN*VELO
        VWTij(J,I) = VWTij(J,I)*VSN*VELO

C       VPERP = VPERP + (VWTij(J,I)**2)*VSN*VELO     ! 02/02/01A
C       VWT = VWT + (VWTij(J,I)**2)
C       VWTi = VWTi + (VWTij(J,I)**2)*VSN*VELO
C       VWTij(J,I) = (VWTij(J,I)**2)*VSN*VELO

        VW = VWTij(J,I)*VSN*VELO                     ! 01/31/01B rewritten
        VPERP = VPERP + VW
        VWT = VWT + VWTij(J,I)
        VWTi = VWTi + VW
        VWTij(J,I) = VW

Thus, I will settle on the version above. All other versions of the program should incorporate this weighting, which essentially places all the line-of-sight variations into the weight. The nominal 16., 16., .65, .65, .25, .25, .4 run of 1965 gives 0.647546 and 0.229140 for the density and velocity correlations for the restricted data set and ACE in-situ measurements around the time of the July 14 CME peak. Other combinations of parameters give higher correlations, but none give the same density values in-situ/model with 16.
and .65 as do these. The run of velocity deconvolution alone (2/12/01) does not allow the parameters to be set. This is now fixed in the main program (2/12/01). The version of the program that deconvolves velocity alone (both with constant density and with mv^2 = constant) bombs in gridsphere with bad VMAP values before any iterations are reached. I have now fixed this and also fixed the problem when G-level alone is selected. The problem was in the setup for each initial velocity or density array. On 2/14/01 the velocity mv^2 works. On 2/14/01 the density mv^2 does NOT work to give velocity. Thus, I had better fix the Mk_D2V subroutine.

On 2/15/01 I re-did a lot of the program to accommodate the modes that use a constant velocity and density and the mv^2 assumptions, plus a write-out of the DMAPHOLE and VMAPHOLE data. These work now and have been checked using the routine to converge on both density and velocity, on velocity using constant density, and on velocity using mv^2. The latest runs give the nominal using 16., 16., .65, .65, .25, .25, .4 for 1965 of 0.666897 and 0.417346. I think the "better" correlations may happen because fillwholeT is no longer double, but I am not sure of this. The correlations look considerably different now, too. Thus, to check, I redefined using fillwholeT as before, and when I did this the nominal for 1965 looked the same as it has in the past in the iterative phase, at least up to iteration 14 or so, where the two began to diverge ever so slightly in the percentages only. The correlations were 0.647546 and 0.229140, which were identical to before. I thus returned the fillwholeT routines so that the times are no longer double-smoothed, and I began a search for more "nominal" parameters. I also got a version of MK_DTV.FOR from the CASS01 fortran files; I renamed it MK_DTVN.for and compiled this version in the IPSd2020 subdirectory.
The new version now works for mv^2 = constant for density iterations and gives approximately correct velocities. The new version also works for mv^2 = constant for velocity iterations and gives approximately correct densities. Oh, yes, another innovation is that the mv^2 = constant is now determined for each iteration, with no convergence problems as far as I can tell. I now also determine the equatorial density and velocity (subroutines Deneq3.for and Veleq3.for), so that the mv^2 = constant is now correctly applied to the other nominal value. In other words, the average speed at the solar equator is used to determine the mv^2 constant to produce density when speed is modeled. Likewise, the average density at the solar equator is used to determine the mv^2 constant when density is modeled. The nominal 16., 16., .65, .65, .25, .25, .4 for 1965 gives 0.666897 and 0.417346.

Velocity convergence: Constant density and the nominal for 1965 above gives -0.123838 and 0.0811669. (The density above looks in the correct range, but it just is not constant. This is because the Mkshiftd routine allows non-constant interactions to build up with height as long as velocities are non-constant.) Also, mv^2 density and the nominal for 1965 gives -0.0242271 and 0.115706. End eq. vel = 334.436707

Density convergence: Constant velocity and the nominal for 1965 above gives 0.347806 and NaN. Also, mv^2 velocity and the nominal for 1965 above gives 0.347806 and 0.318303. End eq. den = ? (Something goes wrong when this last is done, in that the t3d file doesn't seem to write out OK. Also, it isn't clear why the correlation with velocity = constant and velocity with density given by mv^2 does not give a different density correlation. Seems to me that it should.)

On 2/16/01 I think I found the error, at least in the last problem. The DMAP is not being updated each time the mv^2 is done; DMAP is consistently being held constant. Also VMAP is the same.
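The mv^2 = constant closure above amounts to fixing the constant from the equatorial averages and solving for the other quantity. A minimal sketch, with names of my own choosing, assuming mass is proportional to the modeled density n so that n*v^2 = n_eq*v_eq^2:

```python
def density_from_velocity(v, n_eq, v_eq):
    """n * v^2 = n_eq * v_eq^2: the average equatorial density and
    speed fix the constant, and density follows from modeled speed."""
    return n_eq * v_eq**2 / v**2

def velocity_from_density(n, n_eq, v_eq):
    """The reverse: speed from modeled density under the same constant."""
    return v_eq * (n_eq / n) ** 0.5
```

The two functions are inverses of one another, which is the consistency the log demands when one quantity is modeled and the other is derived.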
On 3/15/01 I compiled the ipsd2020.for program with latitude values equal to 10 and ran the program on rotation 1965 with script compka2020script to check for a better convergence than with the old version of the program. The program works better, and while the answers are almost the same in this version as in the test version of the program, inputs 15, .55, .25, .30 for 1965 give 0.840 and 0.230 for the old program versus 0.844 and 0.230 - there is an ever-so-slight difference. On 3/19/01 I copied the test version of the ipsd2020.for program over as the official version of ipsd2020.for, and am running scripts with it as tests. To do this I needed to modify the ips2020inputm.f program in two places. This version of ipsd2020.for has better inputs and outputs for files read from the disk.

On 4/2/01 a new include file was used for MAPRADIAL.H. This new map radial is far simpler and uses equal intervals up from the source surface. Level 10 is no longer 1 AU, though. Paul claims this works better in his IDL programs. It does change the correlations a lot, so that the original 15, .55, .25, .30 for 1965 gives 0.646 and 0.344 now.

On 4/3/01 I replaced the CopyVtoVDN and CopyDtoDVN subroutines, if nTG .eq. nTV and XCbegG(1,1) .eq. XCbegV(1,1), with a simple call to an ArrR4Copy(nLngLatnTG,*,*) replacement. This allows the tomography to run faster and, in addition, the velocity and density tomography is more independent for the two 3D matrices, velocity and density. I am now running scripts to see if I can recover the same inputs as before. The above did not work too well because the Copy programs actually smooth over time significantly, and I couldn't get back the original results. I settled on placing fixes in the writes to the EA files and to the t3d files.
On 4/10/01 MKDMAPTDN.for was modified to use a smoothed gridsphere to limit what is and isn't included in the crossed-component limit (the gridsphere mode in /IPSHTD/MKDMAPTDN.for is now 3, rather than the 4 used in the versions of MKDMAPTDN.for in other versions of the main program). On 4/10/01 mkvmaptdN.for was also modified to use a smoothed gridsphere to limit what is and isn't included in the crossed-component limit (the gridsphere mode in /IPSHTD/mkvmaptdN.for is now 3, rather than the 4 used in the versions of mkvmaptdN.for in other versions of the main program). This changes things so that now I need to again re-run the script to get the best fit to CR1965. On 4/18/01 or thereabouts, I added two versions of Paul's write3d_info.for subroutine to the main ips2020.for program. On 4/23/01 I added Paul's adjustjdcar.for subroutine to the ipsd2020 program. This may change things a little, and since the scripts are still running, I plan to wait to find a better value for the parameters. On 1/18/02 I changed the variable 25.38 in subroutine MKTimesvd and both EXTRACTD's to 27.275.

On 5/20/02 I began the process of making the ipsd2020 program accept a three-component xcshift parameter that includes latitude as well as time shifts. This involves modifications to the main program ipsd2020.f and to mkshiftdn.f, mkpostd.f, extractd.f, extractd3d.f, write3d_infotd.f, write3d_infod3d.f, mkveltd_check.f, and get4Dval.f. To do this a value of Rconst needs to be added to the parameter outputs of mktimesvd. This constant is then added to the inputs of mkshiftdn.f and extractd.f to make them work in an accurate form. On 5/20/02 I modified mkpostd.for to incorporate the complete latitude change. This also takes a modification in the main program to incorporate this projected latitude.
In addition, modifications need to be made to the calls to Get4dval, MkVmodeltd, and MkGModeltdn to include this projected latitude (XLAT --> XLproj). This is done to incorporate the new projected values of latitude that are already handled correctly in these routines.

On 5/21/02 the version of the program that was transferred to CASS185 is the same as the version on both the /transfer and TIPSd2020 subdirectories, with the exception that the transfer version does not do magnetic fields while the programs that Paul and Tamsen install are undergoing change. The version of the program on CASS183 in the main directory, however, does deal with magnetic fields in the old way. Both the CASS183 and CASS185 versions give identical EA and E3 files using default parameters. The timing of the spike for the Bastille-day CME is late in the model compared to the in-situ response. Something funny happened, though, when the program was transferred, because the source numbers and the output values changed considerably and gave somewhat different answers in both the transferred and non-transferred versions when it was run.

On 5/22/02 I tried the sim_MOD routine in the ipsd2020 program to see if this makes a difference in the location/position of the peak for the Bastille-day CME. This run did not work.

On 5/23/02 I discovered that I had inadvertently placed the new latitude shift in Get4Dval as a projected rather than a line-of-sight variable. There is no difference in current analyses, but there will be in the future. This has now been changed in both the transfer and regular versions. I also checked to be sure that all the Get4Dval calls have time first in the calling sequence, since this is out of sequence from the index values. They all do. On 5/23/02 I again tried the sim_MOD routine in the ipsd2020 program to see if this makes a difference in the location/position of the peak for the Bastille-day CME.
To do this I have had to modify the mkshiftdn.f routine to include nCar, JDCar, NCoff and FALLOFF in the subroutine calling sequence. This change is not in the transfer version. On 5/23/02 I discovered an error in Get4Dval: it did not work with the Get4dval(3,....) set. This is now corrected, I hope. On 5/24/02 I found that the sim_MOD routine would not work because arrays were not dimensioned properly in sim_MOD and shift_MOD. I have fixed these now so that sim_MOD works. However, on 5/28/02 I found that the answers given by sim_MOD with default parameters using ipsd2020 did not work to give good answers, even though the routine seems to work in ipshtd77.

On 5/29/02 I discovered the problem with write3d_infotd3D.f. The bottom density map array that was copied into the file to be output was being multiplied by a file that was set to near-bad values. This gave a write error to disk for a non-standard file. This is now fixed. The dummy values of VM and DM that are written in the subroutine need only have two dimensions, and this has now been fixed in the main program and subroutine.

On about 7/1/02 I replaced the fillmapols Xmid value entry because it was incorrect, and I also placed fillmapols in the nv3f file prepare analysis. On about 7/11/2002 I changed mktimesvd.f so that the times begin about half a rotation earlier than they did prior to this date. I believe these changes are currently transferred to the cass185 computer version of the ipsdt program (10/31/02).

On 8/16/02 I changed FillmapOLS so that it is now more automatic and has two modes. The first mode is, as before, to stitch together the non-bad points at the point on the sphere furthest from Earth. As far as I can tell, the early version worked OK. The second mode limits the regions of the maps at the longitudes most distant from Earth so that these regions do not become too great (or small) in either density or velocity according to some limit.
There is also a limit set on the angular distance that this effect can influence - now set at 135 degrees from the Earth. After several runs where there were problems with the program, on 8/17/02 I think I fixed this so that the program now works.

On 10/30/02 I copied ipsdt.f to the transfer directory and recompiled it on that directory. The version of mkshiftn.f on this directory does not have several inputs that are present in the parent directory (see my note on 5/23/02). The recompiled version of ipsdt.f now (10/31/02) runs and has the same numbers of sources as the ipsd2020 version on the cass183 computer. There are differences still unaccounted for at the zeroth-iteration time step for the first g-convergence. The program runs to iteration completion (10/31/02). Late on 10/31/02 I got the ipshdt program down from CASS185 and got it to run and give the exact same iterative values as the ipsd2020 program I have on CASS183. The problem was with fillmapols.f, which had not been updated on the transfer subdirectory on CASS183. The ipsdt.f program now dies at the end, I expect from the updates in the extract routine. The IDL program on CASS183 does not now produce correct answers from the EA file generated using the current version of ipshdt. This is because the answers in the EA file are entirely different from what they should be, including negative velocities. I expect I have incorrect versions of the EXTRACT routines in the CASS183 library, so that they now give bad answers in the EA files. Since I need to speak to Paul to fix this, I had better wait until he comes in.

On 6/2/03 I made a subdirectory TIPSd2020new so that I could take the latest version of my ipsd2020 and modify time in it to reflect real times and not rotations. On 6/2/03 the main problem in the program is that time is currently confused with rotation, and this is continued in the use of the include files.
What is needed for the time-dependent tomography is to make time just that - probably in days from some beginning start-time day several days prior to the main observation (as is currently done in terms of a solar rotation). The only place that rotation should come in is in mkshift, where the rotation rate at a specific time needs to be used to get back to the surface map location at a given time. In this instance the actual rotation rate should be used for each time in order to correctly account for the variation of the solar rotation rate at any given time of the year. The files that contain these include routines in TIPSd2020new are:

File : copyvtovdn.f ***
File : extractd3d.f ***
File : extractd.f ***
File : fillmaptn.f ***
File : get3dtval.f ***
File : get4dval.f ***
File : mkpostd.f ***
File : mkshiftdn.f ***
File : mkshiftdnn.f
File : write3d_infotd3d.f
File : mkvmodeltd.f (added 11/7/03 when mkvmodeltd.f did not work on transfer)
File : mkgmodeltdn.f (added 11/7/03 when mkgmodeltdn.f did not work on transfer)

On 6/3/03 I modified mkpostd.f so that it now calls ECLIPTIC_HELIOGRAPHIC8 and is now supposed to use the time of the LOS in DOY and fraction. The DOY is not yet input in double precision, but now it can be, and the time interval from the beginning should be OK as single precision to time precisions of about a minute. On 6/4/03 I modified mktimesvd.f to a version mktimes.f so that the values needed for the above routines are now available. I also modified the version of mkshiftdn.f to no longer use ReConst and to use the now-OK versions of dXCL and dXCT. There is now no RconstG or ReconstV. This changes the inputs to mkshiftdn.f. By 6/5/03 I had gotten all the above to compile using the newest values of real*8 time in Doy8, and it compiles and runs. On 9/4/03 I modified extractd.f so that it now correctly outputs doy8 and hour.
On 9/5/03 I found that the mkshiftn.f routine did not handle the longitude shifts correctly in this version of the time-dependent tomography. I thus implemented shifts in longitude in the corotating frame, such that longitude shifts in terms of Carrington rotation are used that reflect the slow-fast variation of the Sun below the Earth at whatever given time of the year is chosen for the analysis. This bug most probably has been present in the tomography since I implemented the three-element xcshift parameter shift matrix on 5/20/02. On 9/8/03 the program now runs and extractd.f produces an EA file from the data. The EA files still do not correlate well with the real data, so maybe there is yet another error in the shift matrix.

On 9/9/03, in thinking over the problem, I have come to the conclusion that the time of the observation sets the location on the Sun for the outward material flow, and that the rotation of the Sun for the material observed at that time is simply the inertial rotation of 25.38 days. Thus the location of that material on the solar surface is simply determined by the inertial rotation rate. Thus, the version of the program before 1/18/02 was more correct, and since that time the program has been in error. This is in keeping with the problem I found where, before that time, the program seemed more stable. I checked the shift matrix, and now both the flight time for the material and the shift in longitude are exactly the same, as should be the case for radially outward-flowing material. On 9/9/03 something is still not right, in that Nagoya noons are at 3 UT and this is now where the time intervals begin and end. However, the xcint intervals that should begin and end at local midnight now also seem to begin and end at local noon, and this is not correct. On 9/10/03 I think I have proved to myself that the problem of 9/9/03 did not exist.
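The shift logic for radially outward-flowing material can be sketched in a few lines. This is my own illustration of the idea, not code from mkshiftdn.f: the function name and sign convention are assumptions; only the radial-outflow travel time and the 25.38-day inertial rotation come from the notes.

```python
AU_KM = 1.495978707e8  # kilometers per astronomical unit

def source_longitude(lon_obs_deg, r_au, r0_au, v_kms, period_days=25.38):
    """For radially outward flow at constant speed, the travel time from
    the source surface at r0 out to r sets how far the Sun has rotated
    since the material left; map the observed longitude back by that
    amount.  Sign convention here is illustrative."""
    dt_days = (r_au - r0_au) * AU_KM / v_kms / 86400.0   # flight time
    return (lon_obs_deg + 360.0 * dt_days / period_days) % 360.0
```

At 400 km/s the Sun-to-1-AU flight time is about 4.3 days, i.e. roughly 61 degrees of rotation over the 25.38-day period, which is the size of shift the routine has to apply.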
I then began work on the bomb in the first write3d_info.f routine, and in the early afternoon found the bug - the calling list had an extra comma. This fixed, I then got another problem in write3d_info3d.f fixed, and then another bug in extract3d.f fixed, and finally the program runs through and gives reasonable answers with the default inputs. On 9/10/03 the default inputs are:

begin ht = 15
G-level space = 13.0
velocity space = 13.0
G-level temporal = 0.7
velocity temporal = 0.7
G-level power d = 0.65
G-level power v = 0.65
radial g fall gd = 0.15
radial g fall vd = 0.15

For the above, unlimited 1964.6 gives EA -0.231, -0.040; E3 -0.260, 0.062. The limited gives EA 0.036, 0.029; E3 -0.016, 0.124. Back on 4/2/01, after the changes in mapradial were made, the values 15, .55, .25, .30 for 1965 gave 0.646 and 0.344. I'll now try these same parameters again. For these, unlimited 1964.6 gives EA -0.164, -0.027; E3 -0.233, 0.061. The limited gives EA 0.122, 0.058; E3 -0.057, 0.007.

On 9/16/03 I got the script running again in order to determine the extent to which the settable parameters (filters, etc.) changed the analysis. The program is now very stable to changing filter parameters. However, the correlations never seem to be very good, or at least never seem to give large peaks in the model, and in particular the large peak on 7/15 in the ACE data is not reproduced well in either the velocity or density models. On 9/16/03 I note that the way of determining vratio and dratio in mkshiftdn.f is different from the way it was done back on 03/20/01 in the IPSd1010 subroutine mkshiftd.f. The way back then was to use the ratio between levels and accumulate this, rather than the current scheme of using the shift to the base level as is done now. On 9/18/03 I first tried the most recent version of the tomography program with the newest (cass185, I hope) IPS sources. I finally won!
The tomography program now gives back the peak in the 1965 rotation that was present back on 4/2/01! Not only that, the peak is present in the model no matter whether the source surface is at 15 Rs or 1 Rs and no matter where the rotation begins - some differences are present when runs are made. The EA files give nearly the same correlations as do the E3 files interpolated from the 4D matrix, in this case with only minor differences when 3 intermediate steps in the model are interpolated. On 9/19/03 I got the scripts running and was able to determine the best parameters - not much different from the ones found back on 4/2/01. The parameters 15, 0.75, 0.30 and 0.25 give EA correlations from the limited time series of 0.610 and 0.793 for rotation 1964.6. The density peak in the model does not go as high as the ACE data and is displaced somewhat right of the ACE peak. The velocity correlation reproduces the two velocity peaks and the dip between the events almost as well as the best previous correlations. The default parameters now work to provide the above results. Fidgeting the parameters to 15, 0.75, 0.25, 0.30 gives a larger density model peak with a slightly better density correlation for EA of 0.801, but a lower velocity correlation of 0.419.

On 9/22/03 the results using scripts on CR 2003 (the May 29, 2003 CME) show that the parameters 16, 0.75, 0.30, and 0.25 fit best, with 10-day limited E3 correlations of 0.253 and 0.551. The parameters 15, 0.75, 0.30, and 0.25 (as above) give correlations of 0.088 and 0.481. The density peaks associated with the event are reproduced, but the dip between peaks is not as pronounced as it should be and the peaks are not as high as they should be. In this case, setting the parameters to 15, 0.75, 0.25 and 0.30 does not help increase the peak heights much, and does not make the correlations better.
Although both ACE and model velocities are high (around 600 km/s), the low velocity correlation during the 10-day time interval comes from a peak observed in the ACE data that is not reproduced in the model. ** Thus the conclusion I reach is that the parameters 15, 0.75, 0.30 and 0.25 are adequate values to use, and that both Kenichi's technique and Tokumaru's (which was used to determine the 1964.6 correlation using that data set) give approximately the same result. Thus, I have now given Tamsen the task of placing the time-dependent magnetic field analysis into the new version of the program and checking to make sure the correlations work the same on CASS183. On 10/1/03, with Paul's help, I discovered the problem that did not allow the wso files to be printed out. The problem was an incorrect bbwrite library file. With these files printed out correctly, we placed the new version of the time-dependent tomography on the Web on 10/2/03. On 10/6/03 we found the files on the Web were not updating correctly, and the problem was being caused by mktimes not working correctly. The xctbeg and xctend times were being set as if they were divided by 48 rather than by 47 in the TIMECOORDINATES.H include file. xctbeg and xctend are now changed so that the beginning and end times in days are given correctly. Mktimes was changed and checked. I am currently checking to see that the version of the program on CASS185 works the same as on cass183. On 11/7/03 I got the program working with double precision source times input. The subroutines revised to do this were: ipsd2020.f, mkpostd.f, readvipsn8.f, writem_scomp8.f, and readgips.f (this last will need work if Cambridge data are ever read again, and this was NOT done). The answers for the ipsd2020 program have changed only very slightly. On 6/11/04 I changed subroutines mkgmodeltdn.f and mkvmodeltd.f so that real*8 XCtbeg and XCtend are carried through the subroutines and into the Get3dtval subroutine.
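The 48-versus-47 mktimes bug above is a fencepost error: 48 sample times span only 47 intervals. A Python sketch of the corrected spacing (function name hypothetical, not the Fortran routine):

```python
def time_grid(xctbeg, xctend, n):
    """Sketch of the mktimes fencepost: n sample times spanning
    [xctbeg, xctend] are separated by (xctend - xctbeg)/(n - 1);
    dividing by n instead (the bug) makes the last time fall short
    of xctend."""
    step = (xctend - xctbeg) / (n - 1)
    return [xctbeg + i * step for i in range(n)]
```

With the buggy divisor of 48 the final grid point would land one step short of the true end time, which matches the symptom of beginning and end times in days being given incorrectly.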
On 8/4/04 I found an error in Mkshiftdn.f. Ever since times were placed in terms of days, the search for shifts should have gone to earlier times but instead went to later times at the lower level. This was normally not a problem, since the time normally began at the same value as the lower level, and this was within the range of the time step from one level to the other when the steps were large. Slow velocities might have caused an error here. This is now fixed. On 07/03/08 I began modifying the current version of the ipsdt file on cass185 to incorporate the newest changes. The current ipsdt program (named ipsd2020, re-named ipstd_0) has an updated version of 3/22/07. On 07/03/08: the statements if(NLV.eq.0) bVcon = .FALSE. and if(NLG.eq.0) bGcon = .FALSE. were added before their respective mktimes. All of the readvipsn calls have been changed to readvipsn8, and the outputs of the data files have been modified to include DOYSGG8 and DOYSsave8, and iReadOotyn8 and iProcessOotyn8, in the main program. This requires the readvipsn8 program to be the one in smeiipstd, and to be changed itself in the readvipsn8 subroutine. On 07/03/08 I also modified the extractn.f program so that it now allows an input of the names appended to each output file. This is needed to allow the main program to add new objects. I also included a query at the beginning of the program asking which of these bodies you want to have an input parameter file for. The objects so far are: character cPrefix (NTP)*2 /'ME','VE','EA','MA','JU','SA','UR','NE','PL','UL','H1','H2','SA','SB'/ character cPrefixf(NTP)*2 /'m1','v2','e3','m4','j5','s6','u7','n8','p9','u1','hA','hb','s1','s2'/ Stereo A and B do not have ephemerides available in Fortran yet. On 07/04/08 I have now revised and streamlined smeiipstd.f (smeiipstd0_NHC - No Helios or Cambridge), and in so doing I have slightly modified the readvipsn8.f subroutine.
This needs to be installed into ipstd_0 now that more precision is required of the ipstd program to read in source observation times in double precision. On 07/04/08 I changed to the two values DOYSG8 and DOYSV8. The two routines needing changes in the main program when this is done are MkPostd.f (its name remains the same) and writeM_Scomp, which becomes writeM_Scomp8. On 07/04/08 the main new innovations to the smeiipstd0 program to date (since early 2007) are the installment of an output that is totally filled, the inclusion of gridsphere smoothing over the poles, and the inclusion of the higher-resolution interpolated outputs. These now need to be installed into this program. On 07/04/08 I placed gridsphere with 0.0 in the last place rather than 90.0. I also installed the mkdmaptdn0 and mkshiftd0 subroutines into the main program. On 07/04/08 all is installed into the main program and subroutines. There is still a bit more to do, since the program prompts are not yet complete. On 07/05/08 I discovered that the extractpositionn8.f subroutine was at the end of the extract.f subroutine. When I dropped the extractd.f subroutine from the mk_smeiipstd0 calling sequence, the extractpositionn8.f subroutine could no longer be found. I have now placed the extractpositionn8.f subroutine at the end of the extractdn.f subroutine. On 07/05/08 I downloaded the readgips.f subroutine from the Web, and I produced a version readgips8.f. With this last, the ipstd_0 main program now compiles in the directory ipstd_0 off SMEIIPSTD on new.cass183. On 07/05/08 the ipstd_0 program ran until a segmentation fault in mkvmaptdn.f, because I forgot to remove the DMAPV in the calling sequence of the subroutine in the main program. I had a very difficult time trying to get the program to run without a segmentation fault, and I do not know why. On 07/05/08 I discovered that the mkvmaptdn.f subroutine also had a gridsphere2D in it that needed to have its last parameter set to 0.0 rather than to 90.0.
I fixed this, renamed mkvmaptdn.f to mkvmaptdn0.f, and recompiled ipstd_0. On 07/06/08 the ipstd_0 program still bombed at the end, after all iterations, with a segmentation fault. I discovered that the versions of copydtov and copyvtod did not include a scratch array about three arrays from the end. This is now fixed everywhere in the ipstd_0 program. On 07/06/08 I discovered an error in calling extractdn.f; FALLOFF and NinterD were reversed in the first of the two calls to extractdn.f. On 07/06/08 I remembered that the r^-2 needs to be established for the current version of the write3Dinfo in ipstd_0 (as well as in the extractn.f routine), but that this must not be present in the current versions of the high-resolution writes to the disk. Therefore, before and after each high-resolution write I now remove the r^-2 falloff, and then put it back. On ~07/18/08 there are several changes to the ipstd_0 program. This program is split into two - ipstd_10 and ipstd_20. The former runs at 10 x 10 deg resolution and defaults to a half-day cadence in order to take the most advantage of the more abundant Ooty data. The latter is the same old routine that runs in 20 x 20 deg and one-day cadence mode for the normal STELab data. At about this same time, I installed the higher-resolution version of write3d_info3DM_HR.f into these programs, and I also placed a write and the analysis copied from the smeiipstd0 program that outputs completely-filled "nv3o" files. This now works and gives better-looking nv3 files from the analyses, and completely filled files from the IPS time-dependent tomography. The high-resolution "nv3h" write now defaults to a file that goes from 0-3 AU, and this file is 81 Megabytes for each time. There have been several additional cosmetic changes to the program to make it more user-friendly. These involve the inputs to the program, and when the user chooses Ooty analysis for velocity the same is also chosen for density.
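The r^-2 bookkeeping around the high-resolution writes above can be sketched as follows; a Python illustration with hypothetical names, not the Fortran itself:

```python
import numpy as np

def hr_write_without_falloff(write_fn, density, r):
    """Sketch of the r^-2 bookkeeping: take the r^-2 falloff out of the
    volume before a high-resolution write, then put it back so the rest
    of the program sees the original scaling.  The 1-D shape and names
    are illustrative only."""
    density *= r ** 2        # remove the r^-2 falloff in place
    try:
        write_fn(density)    # the high-resolution write sees falloff-free values
    finally:
        density /= r ** 2    # restore the falloff afterwards
```

Restoring in a finally-style step mirrors the log's point that the falloff must be present again for everything that runs after the write.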
In addition, the user now chooses whether to output a number of different files, from Ulysses or STEREO for instance, when these data are available in the program at its end. On 07/28/08, because the higher-resolution "nv3h" files often show higher-resolution features that are present in the kinematic model, I have modified the extractdn.f program to output STEREO data. This required a new subroutine, stereoorbits.f. This new subroutine contains the current STEREO ephemeris from NASA. The stereoorbits.f program was discovered not to work, and not to agree with the runs made when STEREO locations were extracted using Paul's IDL program. The reason for this was found in the subroutine kepler_orbits called by the stereoorbits.f program. There was a different definition of some of the orbital parameters in this latter subroutine. Paul fixed this, but it was also discovered that the Fortran version of ulyssesorbits.f has always been wrong, so that if it were ever used it would not give the correct location for Ulysses. This is now fixed in both ulyssesorbits.f and stereoorbits.f, and checked to agree both with the IDL routines and, for STEREO, with the NASA ephemeris. The ulyssesorbits.f subroutine is only strictly good for the first Ulysses solar pass, and needs to be revised to include better parameters for subsequent Ulysses passes, including the current pass. The first attempts to provide STEREO data extractions have been made using the incomplete IPS data from STELab and the ipstd_20 program. This shows that the stereoorbits.f subroutine appears to work. More checks are currently being run. On 07/29/08 I discovered an error in the ipstd_0 (ipstd_10 and ipstd_20) programs. When the program was run without producing "nv3h" or "nv3o" files, the extractdn.f program at the program end output densities that were far too great (the Falloff had not been removed in the earlier data sets).
This is now fixed, and the extractdn.f subroutine now gives proper densities whether or not "nv3o" files are produced. On about 08/01/08, I found that John had modified the readvips routine so that it incorrectly read Ooty data. I modified the readvips8.f program to read Mano's current data set correctly. This provided really good correlations of velocity with Mano's Ooty data set. This new routine was installed into the program. On about 08/15/08 I modified the readvips8.f and main program, and added two subroutines to the main program called writegoodsourceg.f and writegoodsourcev.f. On request, the program now writes out two files that duplicate the input files in every way except that the lines of sight the ipstd_10n.f program thinks bad are flagged bad. These output files can then be read by IDL routines that place lines of sight on skymaps. On about 08/18/08, Mano sent new Ooty data (from 2007) with a different format. I modified the readvips8.f routine to read Mano's new Ooty data set format. The program still outputs the good-source Ooty files with the old format for pickup by the IDL routines. These new Ooty data, obtained at solar minimum, do not give the same good velocities as does the data set from 2004. On about 08/23/08 Mario discovered that the program bombed when it was asked to provide Ulysses files. This caused a rewrite of the input to the extract routine to: character cPrefix (NTP)*2 /'ME','VE','EA','MA','JU','SA','UR','NE','PL','Ul','H1','H2','Sa','Sb'/ character cPrefixf(NTP)*2 /'m1','v2','e3','m4','j5','s6','u7','n8','p9','u1','hA','hb','s1','s2'/ On about 09/08/08 I added parts to the main program and to mkdmaptdn0.f and mkvmaptdn0.f (now called mkdmaptdn0n.f and mkvmaptdn0n.f) to write out, on request, er1_ and er2_ files that show 3D confidence levels that can be imaged similarly to nv3 files. The er1_ files contain the composite line of sight crossings that are used to determine (or not) that the region has been deconvolved.
This array was always brought out of the two above subroutines, and now it is used as input to the write3d_infotd3Dm.f routines. The write3d_infotd3Dm.f routine needed to be modified so as not to multiply in the normal density- and velocity-with-distance factor, as is done for the density and velocity outputs. Before use, the array values are modified to reflect the Gaussian temporal and spatial filters used. The er2_ files are the composite weights on the source surface, and these were never output from the mkdmaptdn0.f and mkvmaptdn0.f routines. These arrays contain information not only about the weighting functions, but also about the densities and velocities along the line of sight used to weight the source surface. They look not only something like the line of sight crossings, but also like the densities themselves. Before use, these array values too are modified to reflect the Gaussian temporal and spatial filters used. On 01/02/09 an error was discovered in the data that provides the reconstructions. The lines of sight on the Carrington map seem shifted by about 120 degrees from where they should be. We do not know when or where this error crept into the program, but suspect that the changes to the read routines on 08/01/08, when the readvips8.f routine was modified, may be the cause. The current version being compiled is named readvipsnn8.f and called as readvipsn8.f. On 01/02/09 I found an error in the mkpostd.f subroutine. This would only give an error in the case of a year crossing. When a year was crossed, the Doy8 used in ECLIPTIC_HELIOGRAPHIC8 would not have been correct. The value of tXC was corrected for a year crossing correctly as long as Adaybeg is correctly set as the total number of days in the preceding year. This is now fixed in the mkpostd.f routine. On 01/02/09 I found an error in the main ipstd_20n.f program. In this program, the value of Idaybeg that goes into the Mkpostd subroutine was never set.
The variable Idaybeg (which comes out of the mktimes.f subroutine) was never initialized to the value from the mktimes.f routine that is either Idaybegg or Idaybegv. The resulting input of "0" to the mkpostd.f subroutine should only have been wrong when a year end was crossed, and thus only wrong when Ooty data crossing a year boundary are used. However, I also note that the way Idaybeg is checked in mkpostd means that a "0" in this location, rather than a 365 or 366, for the first 180 days of the year would have been interpreted as a year-end crossing, causing a condition to be set that was incorrect for the middle of the year. Because of the error (mentioned directly above) in the mkpostd.f subroutine, this would have made an error in the projected lines of sight. I note that in the smeiipstd program, Idaybeg is initialized as it goes into the mkpostd.f subroutine. On ~03/24/09, after Mario Bisi had Paul Hick print out the lines of sight, it was discovered that there was a problem in the newest version of ipstd_20n in producing these lines of sight. This was traced over a week's time to using the velocity-only option with EISCAT data, and an IdaybegG rather than IdaybegV in the mkpostd routine. On ~5/11/09 I completed preliminary work, begun about 4/10/09, on the version of the program that includes ACE level 0 data in the EISCAT analysis. This included modifications to the following routines: ipstd_20n.f --> ipstd_20n_in.f, writelosmapvg_in.f, mklosweightsm.f, aips_wtf.f, readace8.f, mkpostd_in.f, mkvmodeltd_in.f, mkvmaptdn0n_in.f, fixmodeltdn.f. The current program is locked to use only acesw_2007.hravg data available in the same directory where the program is run. This happens in readace8.f. On 10/05/09 I tried the program on new.cass183.ucsd with Nagoya data in 2007. It seems to work on rotation 2061 so far. These runs have formed the basis for the paper in Solar Physics, in press in 2010, using CR 2061 data.
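The Idaybeg year-crossing logic above can be sketched roughly as follows; this is a hedged Python reconstruction (the 365/366 value and the 180-day test are from the notes, everything else is illustrative and hypothetical):

```python
def continuous_doy(doy, idaybeg):
    """Hedged sketch of the Idaybeg/Adaybeg logic: when a run crosses a
    year end, Idaybeg carries the length of the preceding year (365 or
    366), and a small day-of-year from the new year is offset by it so
    times stay monotonic.  An uninitialized Idaybeg of 0 defeats the
    test, which is the bug described in the log."""
    if idaybeg >= 365 and doy < 180:   # early-year DOY after a year crossing
        return doy + idaybeg
    return doy                          # no crossing detected (or idaybeg = 0)
```

With idaybeg left at 0, an early-year day-of-year passes through unshifted, which would misplace the projected lines of sight in just the way the entry describes.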
On 4/15/10 I began to modify this program to run using ACE level 0 density data as well as velocity data. This change requires modifications to the programs: ipstd_20n_in.f --> ipstd_20n_inp.f, mkgmodeltd_in.f, mkdmaptdn0n_in.f. On 4/16/10 I discovered that XCtbeg and XCtend were not defined real*8 in mkvmodeltd_in.f. This is now corrected in mkvmodeltd_in.f. However, the same holds true for mkvmodeltd.f and thus may need to be fixed there. On 4/20/10 I was able to get ipstd_20n_inp.f running and to use the in-situ ACE data to converge to the time-dependent solution. To do this I used the default rotation 2061 (as in the SP article submitted in 2009) run using STELab data. This required modifications to the two subroutines listed above, but mostly the differences are in the ipstd_20n_inp.f main program. On 4/20/10 I noticed that the ipstd_20n_in.f program is incorrect. The loop beginning: do KKK=1,NLmax+nTG ! Loop before deconvolution to set up VMAP, DMAP ends with NLG and NLV being decreased in size. I have now named new variables that no longer decrease NLG and NLV in size. These are: NLVBEG, NLGBEG, NLVEND, and NLGEND. To show which lines of sight not to use, I now flag these bad in NBSG and NBSV. This error must not have affected ipstd_20n_in.f, but it potentially could have given problems. On 4/20/10 other changes to the ipstd_20n program include: queries as to whether or not ACE densities are considered; queries as to whether or not the densities are used to modify the base (1 AU) density; changes to determine a value of GOBS in terms of g-level from the density in the main program before mkdmaptdn0n_in.f; and a change to use the mean value of the density to change the value of the density at 1 AU and to point out this change if it is made. On ~4/25/11 the readace8.f subroutine that was revised to readace8m.f was included in this program.
This new version of the subroutine is now supposed to be able to read two consecutive years of data if required, and to allow the program to set the year of the data file. This same routine version was provided to read ACE level 2 data (swace-20xx_hravg) as readace2_8.f, Wind data (20xx_WIND_hourly_averages) as readwind8.f, and CELIAS data (celias_20xx.hravg) as readcelias8.f. The main program was revised to accommodate these new subroutines and to test them, with the default being the ACE level 0 data read. The subroutines as yet still must have the data in the same subdirectory where the program is run. On 4/25/11 I now use a Make file in this subdirectory on Bender that allows either an ipstd_20n_inp or an ipstd_10n_inp_rec main program to be compiled. This new Make file seems to provide intel-compiled main programs OK. On 4/25/11 the modifications made included those to the main program to write out a more concise statement of the values that are produced in mktimes.f. On 4/25/11 I also added the calls in the main program for the new version of the mkshiftn0n.f subroutine that will now work in case a higher-resolution version of the main program is used (such as ipstd_10n.f). This subroutine is now called with the "speed" parameter in the calling sequence. On 4/26/11 I put the smeiipstd0n version of write3d_info3dM_HR.f into the main program. This version takes new arguments that have to do with the limits of the error matrix on the high-resolution files. The ipstd10n program now includes this newer version of the write3d_info3dM_HR.f subroutine. To fix this problem required that I modify the main program to include these new arguments and to initialize some of them with questions asked in the main program. The program now works and writes high-resolution files in the main program ipstd_20n_inp. On 4/26/11 I now ask the person running the program to what height (rather than to what number level) he wishes to output the high-resolution files.
These are defaulted to from 1 to level 31 for nMaps, as before. On 4/13/11 I noted that on 3/4/11 I looked carefully at the 3D reconstructed ecliptic and density cuts for the two above resolutions. There are significant differences. The same structures are generally present near Earth. Both 3D reconstructions show the structures to move outward from the Sun with the same speed. The data in the higher-resolution files seem to make more sense in some ways, since there are more density loops distinguished in the higher-resolution files and less structure corotating in the hemisphere opposite the Earth. I guess this will need further study to see if the in-situ at Earth is better served with the higher-resolution reconstructions. On 4/26/11 I provided a way to limit the tomography if too few lines of sight are present in a rotation. This limit is set to 150 lines of sight. The program now asks you for this limit, and then, if this limit is not reached for either velocity or density, the program does not attempt to reconstruct the data using this option. If there are too few lines of sight for both, then the program stops and tells you that you need to lower the limit (or quit). I haven't tested this programming yet. On 4/26/11 I changed the high-resolution file write so that the files written in forecast mode stop after the midpoint plus 0.2 of a CR is reached following the forecast time. This is the same as in the smeiipstd program now. On 4/26/11 I incorporated all the changed extract routines in the main program. These included extractdn_rec.f, extractd3d.f, extractdn.f, and extractdn_intel.f. The error was found because the extract routine seemed not to work well with CR 1964.6 when the extract routine was used just following the first real extract analysis, extractd3d.f, in the main program. That subroutine worked OK and always did.
However, for this routine, when the longitude of the observer changes by more than half a CR, the non-fixed subroutine would not correctly make the extraction if this jump occurred at the very beginning of the extraction (for some specific times). Rotation CR 1964.6 was one such rotation, and the beginning of the extraction was one such time. The effect was to provide an observer location that was incorrect by a whole Carrington rotation for the extractdn.f routines. This problem will need fixing in all the extract routines, by replacing the statement if(I.gt.-NTT) then ! Other times with if(ObsXCL.gt.-99.) then and initializing ObsXCL = -100.0 earlier in the program. On 4/27/11 I found and fixed a major bug in the mkgmodeltd_in.f subroutine. The values of GM2(NL+N_IN) and GMWTi(NL+N_IN) were only dimensioned NL. The arrays here would surely have been overwritten. On 4/27/11 I also discovered and fixed a second major bug in the mkgmodeltd_in.f subroutine. Every so often the model density went negative in the latter part of the program, where the model was to produce a model value. This caused the value of GM2(I) to become NaN, and this is now fixed by limiting the densities obtained in the model to positive (but very small) values in the latter part of the program, as in the main program. On 4/27/11 I also found a major error in the main program. Just prior to the call to the fixmodeltdn.f subroutine, and following the call to the mkgmodeltd_in.f subroutine, the observed density values GOBS(I) were sometimes a bad value without NBSG(I) being set to zero. This occurred only in the analysis beyond NLG observations. This is now fixed by placing a procedure in the main program that sets the value of NBSG(I) = 0 for these values. On 4/27/11 I also found and fixed the reason that the SIG parameter sometimes became NaN in the fixmodeltdn.f subroutine.
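The ObsXCL fix for the extract routines above amounts to unwrapping the observer longitude against the previous value, with -100.0 as a "not yet set" sentinel. A hypothetical Python sketch, not the Fortran:

```python
def track_observer_longitude(xcl_values):
    """Sketch of the extract-routine fix: unwrap the observer's Carrington
    longitude (here in units of whole rotations) so that a jump of more
    than half a rotation at the very start of an extraction cannot leave
    the observer off by a whole Carrington rotation.  ObsXCL = -100.0 is
    the sentinel from the fix; the unwrapping itself is an illustrative
    reconstruction."""
    obs_xcl = -100.0                  # initialized early, as in the fix
    unwrapped = []
    for xcl in xcl_values:
        if obs_xcl > -99.0:           # a previous value exists: unwrap
            while xcl - obs_xcl > 0.5:
                xcl -= 1.0
            while xcl - obs_xcl < -0.5:
                xcl += 1.0
        obs_xcl = xcl
        unwrapped.append(xcl)
    return unwrapped
```

The sentinel test replaces the old `if(I.gt.-NTT)` guard, so the very first time through is recognized as "no previous value" rather than being unwrapped against garbage.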
Every so often the value of density became so low that in the fixmodeltdn.f subroutine FIXR(I) became too low for SIG to be calculated. The limits if (R.gt.10.0) R = 10.0 ! Fix in 4/27/2011 BVJ and if (R.lt.0.1) R = 0.1 are now in place in the proper location to solve this long-time problem. On 4/27/11 this program now runs through and gives answers. There still seems to be a "bug" in the program, however. The extract routine that produces the e3 tomography density file gives a density that goes from 0.0 to 0.015. The density looks to peak in tomography IDL on July 15 at 0 UT. The peak in the file is at DOY 197 (July 15) at 06 UT at 0.016. The value on DOY 197 0 UT is 0.014. This density peak for the Bastille-day CME is timed about as it is supposed to be. The velocity magnitude for this same e3 file seems OK, but the minimum of the tomography speed is located on 07/11, about 4 days earlier than it is supposed to be, at about 300 km/s. In the e3 file this minimum is placed at 316 km/s at DOY 193 (July 11) 12 UT. The ACE L0 data show this minimum at about July 15 at 0 UT. The subroutine that produces this is called the extractdn.f (2nd Extractdn) subroutine. The E3 ACE L0 agreement for density has a tomographic peak at 07/14.5 to about 90 particles, and the E3 file confirms this peak on DOY 196 18 UT and 197 0 UT at 87 and 81 particles respectively. Other peaks in this file seem about right. The E3 ACE L0 tomography velocity has a minimum at 07/11 like the e3 file, and is confirmed in the E3 file at 296 km/s on DOY 193 (July 11) at 6 UT. This file is produced by the Extractd3D.f (Before Extractd3D) subroutine. The Ea ACE L0 density tomography peaks at 07/15 6 UT to 119 particles. The IDL plot seems to be at 0 UT and is at exactly the same time as at ACE L0, but the L0 peak goes to only about 47 particles. The Ea velocity minimum is again at 316 km/s on DOY 193 at 12 UT, 4 days earlier than at ACE.
The extract routine that produces this is the extractdn.f (first extract) subroutine. On 4/27/11 I think I found the reason the e3 file was too low - one falloff too many before the last extract routine. On 4/28/11 I found another error: just before the FixModeltdn subroutine, for both the velocities and the densities, a limit placed on the model was activated incorrectly. The line read: if(WTSV_in(I-N_INV).eq.0.and.I.gt.NLV) then and should have read: if(WTSV_in(I-NLV).eq.0.and.I.gt.NLV) then This made the program bomb sometimes, but the limit should surely not have been there. On 4/28/11 I found that when the extract routine is used, it limits the final time cutoff to the time XCtendG. Since this is the current time, no forecasts are possible using the extract routines. I have now fixed this by adding a small 4-day value to the current time, but only when the forecast mode is invoked. This seems to give proper values beyond the current time. On 4/30/11 I changed the main program to default the PWG and PWGG density proxy to 0.20 (from 0.35); this works better with both the current IPS proxy and the ACE level zero density conversion to give a better fit (more of a one-to-one correlation) to the IPS-derived densities. I suspect this value is somewhat unique to the lower-than-average ACE level 0 density excursions, and to the day averages that are used in the time series to compare the IPS tomography with the averaged ACE level 0 data.
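For reference, the R limits quoted in the 4/27/11 fixmodeltdn.f SIG fix transcribe directly to Python (helper name hypothetical):

```python
def clamp_fix_ratio(r):
    """The limits from the 4/27/11 fixmodeltdn.f fix, transcribed: the
    ratio R is clamped into [0.1, 10.0] so SIG can always be computed
    even when the density becomes very low."""
    if r > 10.0:
        r = 10.0
    if r < 0.1:
        r = 0.1
    return r
```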
On 4/30/11 and 5/1/11 I was able to modify the main program to decrease the g-level to fit the ACE level zero data better. This results in a lower density level than the nominal one, and is probably special to the ACE level 0 data, where the mean value and the lowest densities are lower than in any of the other space experiments, including the density data from the ACE level 2 analysis. There is now an option in the main program to provide this innovation. The option works by determining the mean value of the IPS level from the density conversion from the in-situ data following the change of the mean density. The mean IPS level for the interval is supposed to be 1.00. If it is not, the new mean is used as the IPS g-level mean, and the IPS g-values less than the original value of 1.00 are altered to provide this new mean g-level. This works pretty well and is iterative, in that it allows the current tomography itself to provide a best fit to the mean g-level. A better way would be to decrease the total in-situ density over the interval to provide a more accurate tomography fit to the arbitrary density, even if this lower density value is unrealistic. As yet no process has been built to iteratively give a best fit using this idea. On 5/2/11 and slightly later, I changed a few items in the extract routine to hopefully forecast densities (and velocities) further in advance in a better way. The main changes are near mktimes, so that the times produced extrapolate into the future as much as possible in extractdn.f in forecast mode. This was done by changing NTV and NTG before mktimes.f when the system is in forecast mode. This is being tested in the IPS forecasts from Toyokawa. Perhaps we should not extrapolate this much, but time will tell. On 5/18/11 an error was discovered in the main program.
The primary error was caused by the error limit in the main program asking "Do you want the density error to limit HR densities?$no",bDdener. bDdener was set correctly, but bVdener was set to .TRUE. and never accessed. When the error files were not used, bWrerr was set to .FALSE., and this caused the bVdener in the high-resolution writes to severely limit the volumetric data. This is now fixed in the main program, and the production of good limit files is now still made even if bWrerr is set to .FALSE. I also modified the high-resolution write program to initialize the values of DENER and VENER in the write3d_infotd3dM_HR.f subroutine. On 5/18/11 I modified the main program so that in forecast mode the values of NTV and NTG become: if(bForecast) then ! For forecast mode NTV = nint((float(NTV) + 3.0)/2.0) + 7 NTG = nint((float(NTG) + 3.0)/2.0) + 7 end if The old version of the program severely limited these values (to two days earlier) in the high-resolution write, to a little over a day from the current run time. These values were 29 and are now set to 31, and so the forecast now goes a little over 3 days into the future. On 6/22/11 I fixed an error in mktimes that produced a value of nT in forecast mode. When the input value of nT was smaller than the default, the value set was what had been calculated outside the subroutine. When this was an odd number, the program passed this number on to the outside. This number needs to be even in order to give values timed according to the exact temporal intervals in the extract routines. In any case, this is now fixed in the forecast, and the same answer is available in the non-forecast version of the analysis. The value above of 31 did not work in the current scheme. On 7/31/2011, and a week earlier, I fixed the magnetic field output for the time-dependent routine.
First, I added a new call in the main program, called write3D_infotd3DMM(). This write-output version of the program actually produces no file output, but provides the velocity array needed to extrapolate the potential magnetic field outward. In this way, if the magnetic field is to be extrapolated, this write subroutine need not be run to produce output earlier in the program. The actual error in the program was a two-part problem. Paul produced the original programming. The program first died with a segmentation fault when magnetic field output was asked for. The location for this was found in spiney.f. It was caused by the input array in the main program not being set large enough. This scratch array is called BR2DT, and should be increased in the main program to & BR2DT(nLng,nLat,4*nTmaxG), !scratch from BR2DT(nLng,nLat,nTmaxG). When this was done, the subroutine did not bomb, but the output was zero. The reason for this result was eventually found to be an error in the Write3D_bbt.f subroutine. The error was in the call: Pss(3) = ArrR4Interpolate(nTim, TT3D(kT3D), TTi+XC3D( loc3D(iLng,iLat,iRad,iTim,3) )). The original array TT3D was begun at TT3D(iT3D), which began the interpolate routine in time following the location of the needed maps. This had no effect when the array began at that time, but in the current Write3D_bbt.f subroutine a search is initiated at the beginning to determine the earliest good time, and the numbers of needed arrays and files are used from the location kT3D rather than iT3D. I should add that the problem was fixed in Paul's on-line version of the Write3D_bbt.f subroutine, but not in the versions that I found on my bender computer. On 9/27/11 John found an error in writing out the SRCGV files in the subroutine writegoodsourcev.f.
The write must include NLV (not NLG) as in the current programming. This error is now fixed in this subroutine. There is still another problem in the writegoodsourcev.f program: the line

if(NBSV(I).eq.0) write(SRCGVN(50:53),'(I4)') IBADVOBSN ! Bad Nagoya velocities

should read

if(NBSV(I).eq.0) write(SRCGVN(50:54),'(I5)') IBADVOBSN ! Bad Nagoya velocities

so that the correct bad-velocity values are written into the nvgoya.xxxx file at this location.

On 9/27/11 John also discovered a similar error in the readace8m.f subroutine. The current line iVel = nint(VEL) should read iVel = int(VEL). Below this line,

if(DEN.eq.-9999.9) DEN = -9.99999

needs to be placed to provide a bad density value that fits into the g-value space provided and output by subroutines in the main program. These fixes need to be carried through to the read routines other than ACE level 0, and checked to be sure they work with bad in-situ data.

On 9/28/11 I made changes to the printout of the last source values so that they now state whether they are in-situ or IPS, etc. On 9/28/11 I also added a subroutine to the main program called writegoodsourcegv.f. If NLV = NLG and N_ING = N_INV, this subroutine outputs a single combined data file called negoya.xxxx.

On 9/28/11 and 9/29/11 I found a bug in the tomography program that truncated the analysis before the last weights were provided for the sources at the end of the observing interval. This had the effect of not allowing the last IPS sources to be counted in the forecast analysis. The statement

if(bVcon.and.XCEV(K).gt.XCintV(NTVV+2).and.XEV(K).le.XEhighV) then

should read

if(bVcon.and.(XCEV(K).gt.XCintV(NTVV+2).and.XEV(K).le.XEhighV).or.(K.ge.NLV)) then

This now allows the last remote-sensing sources to be counted. I further put a condition on the following write statement so that nothing is printed if no sources are found in the interval.
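The I4-versus-I5 field-width problem above is easy to reproduce; a minimal sketch (Python, with a made-up five-digit bad-value code) shows why the narrower edit descriptor cannot hold the flag. Fortran fills an Iw field with asterisks when the value does not fit:

```python
def write_flag(code: int, width: int) -> str:
    """Emulate a Fortran Iw edit descriptor: right-justified in `width`
    characters, or all asterisks when the value overflows the field."""
    s = str(code)
    return s.rjust(width) if len(s) <= width else "*" * width

# A five-digit bad-velocity code (hypothetical value) in I4 vs I5:
print(write_flag(99999, 4))   # '****'  -- the old SRCGVN(50:53),'(I4)' write
print(write_flag(99999, 5))   # '99999' -- the fixed SRCGVN(50:54),'(I5)' write
```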
On 4/1/12 I fixed a couple of year-end problems that have come up (because of the year end). These corrections were put in place in the mag version of the program today. There was an "or" in the main program concerning the change of year and day that was set to an "and"; this did not help. There was an L=NLV+1,I_INV that was absolutely incorrect, and John found this. The main problem causing the program to bomb was that the VMAP array was not initialized. The SMEI tomography handles this by always providing a velocity array that starts with a constant 400 km/s velocity. In this program, the problem presented itself because there were absolutely no remotely-sensed velocity sources in the rotation. There is now a trigger: if a VMAP is always left at zero (because there are always bad values in all the VMAPs), a provision is made to provide initial constant velocities to start. This same fix was applied to the g-level data, in case there are ever velocities but no g-levels. I also fixed the output statement about the numbers of good velocity values and good in-situ velocity values so that they are output separately; with these few fixes, tomography runs with only in-situ data can proceed and give the numbers of in-situ values.

On 4/1/12 I also fixed the ACE read subroutines. These subroutines were never able to cross from one year to the next. They now do, by checking iostat: when it is nonzero, they loop back, open the next year's file, and add the data from that file up to the specified cut-off time.

On 5/30/12 I worked to place all in-situ reads into the current program so that they can be input with a wildcard placed on the command line. This required placing a cwildace, etc., parameter into all of the in-situ data read routines, and typing these wildcards character*80 in the read routines and the main program. The defaults are now the zshare directory as specified in the main program.
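The year-crossing read logic described above can be sketched as follows (Python; the data layout and names are hypothetical stand-ins for the yearly acesw files). Reading a year's file to exhaustion, the iostat check in the Fortran, triggers opening the next year's file, and reading stops at the cut-off time:

```python
def read_until_cutoff(files_by_year: dict, start_year: int, cutoff: tuple) -> list:
    """files_by_year maps year -> list of (day_of_year, value) records,
    standing in for one file per year.  Exhausting a year's records
    (iostat != 0 in the Fortran) advances to the next year's file;
    records past the (year, doy) cutoff are not added."""
    out = []
    year = start_year
    while year in files_by_year:              # "open the next year file"
        for doy, val in files_by_year[year]:
            if (year, doy) > cutoff:
                return out                    # past the specified cut-off time
            out.append((year, doy, val))
        year += 1                             # end-of-file: cross the year end
    return out

data = {2011: [(364.0, 410.0), (365.0, 420.0)],
        2012: [(1.0, 430.0), (2.0, 440.0)]}
print(read_until_cutoff(data, 2011, (2012, 1.5)))
# -> [(2011, 364.0, 410.0), (2011, 365.0, 420.0), (2012, 1.0, 430.0)]
```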
On the command line one types one of the following, or here at UCSD simply does nothing and leaves the command line blank:

ace=$DAT/insitu/acesw_????.hravg
ace2=$DAT/insitu/swace_????.hravg
wind=$DAT/insitu/wind_swe/????_WIND_hourly_averages
celias=$DAT/insitu/celias_????.hravg

In addition, on 5/30/12 I worked to provide the second request (for the density to default to the velocity version of the in-situ parameter chosen) in ipstd_20n_inp.f. These all seem to work, at least through the second iteration.

On 6/5/2012 I attempted to find out why the magnetic field for CR2059 has a spike in the data in the middle of the rotation. To do this I modified the writebbt.f program (writebbtm.f) to print out intermediate magnetic field values. I found the spike in the source surface maps on CR 2059 when I did this. This implies that the spike comes from the original source surface maps that are spline fit. I noticed that the magnetic field data for the rotation go through to CR2059.0861, and then pick up again at CR2059.7846. This is a big gap, and it may be that the map on one or the other end of the gap has bad values.

On 6/10/2012 I fixed the cut-off beyond the input date to eliminate in-situ data points later than that date in forecast mode in all the current ipstd programs: ipstd_20n_inp, ipstd_20n_inp_mag, and ipstd_20n_inp_mag_mod. This did not work in forecast mode before this time.

On 8/23/2012 I found an error in the input format for the Wind data. The read skipped 68 rather than 58 values, and this let the density error, rather than the density, be input to the tomography program. With this fixed, I expect a far better result when reading the Wind in-situ data as input.

On 8/24/2012 I found that the ipstd programs all used bGLACE for fitting the mean value of the G-level, but that this same trigger, bGACE, was used in the main program near fixmodeltdn as the trigger to use the in-situ densities.
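The key=value options above suggest a simple command-line parse; a sketch follows (Python; only the four in-situ keys are shown, and the default patterns here are hypothetical placeholders for the zshare-directory defaults, with the directory part omitted):

```python
import fnmatch

DEFAULTS = {  # hypothetical stand-ins for the zshare-directory defaults
    "ace": "acesw_????.hravg",
    "ace2": "swace_????.hravg",
    "wind": "????_WIND_hourly_averages",
    "celias": "celias_????.hravg",
}

def parse_insitu_args(argv: list) -> dict:
    """Blank command line -> defaults; 'key=wildcard' overrides one entry,
    in the spirit of the cwildace-style character*80 parameters."""
    opts = dict(DEFAULTS)
    for arg in argv:
        if "=" in arg:
            key, pattern = arg.split("=", 1)
            if key in opts:
                opts[key] = pattern
    return opts

opts = parse_insitu_args(["ace=acesw_2012.hravg"])
print(opts["ace"])                                            # acesw_2012.hravg
print(fnmatch.fnmatch("acesw_2012.hravg", DEFAULTS["ace"]))   # True
```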
I have now changed this so that bFACE is the trigger in this location, as appropriate.

On 8/25/2012 I found and fixed an error with the CELIAS density reads: there was an extra G on the end of XCEGG in the input read call in the main program. The Wind density read is still not fixed, even following a complete replacement of the input calling sequence.

On 8/25/2012 I changed all the velocity read routines so that they now input floating-point velocities into the observed velocity arrays. Prior to this, all velocity values were rounded to the nearest integer in the input read arrays.

On 8/26/2012 I found the error in the Wind routine. I was reading the wrong parameter from the Wind data file. The correct density parameter requires a skip of 20 spaces following the velocity read. When this is done, the Wind correlations generally come up to better than 0.9. I am sorry these reads took so long to fix.

On 10/12/2012 I modified the main programs of all the ips routines to allow an output of Messenger data on or after 2007. The original programs limited the outputs to on or after 2005.

On 11/07/2012 I fixed a small error in the main program that would have hindered in-situ reads of ACE level-2 data using a wildcard.

On 11/08/2012 I was able to get the in-situ reads to allow a wildcard input such as:

ace=$DAT/insitu/acesw_[4].hravg

This is similar to the way John got the program working in the past at the CCMC. All four data reads were checked. On 11/08/2012 I was also able to get the in-situ reads to go through the year end. All four data read inputs were checked.

On 11/15/2012 I began modifying the ipstd_20n_inp_mag.f program to read MEXART data and to include source size in the input. I modified both aips_wtf.f and mklosweightsm.f and the main program to include a source size in the input.
The resulting subroutines are called aips_wtfm.f and mklosweightsmm.f. The call to mklosweightsmm is now included in the main program as:

call MkLOSWeightsmm(NLOSWV,dLOSV,NLOSV,NTVD,SSV(NTVB),XEV(NTVB),DISTV(NTVB),WTSV(1,NTVB),PWRV)

Here SSV(NTVB) is the new input source-size value. In this first attempt, I initialize all the source sizes to zero at the beginning of the main program. The one MEXART velocity source-size value each year is input specifically after the Nagoya read by:

do I=1,NLV ! MEXART quick fix
   if(SRCV(I).eq.'3C48M ') then
      print *, 'MEXART SOURCE FOUND ', SRCV(I), ' I =',I
      SSV(I) = 0.25
   end if
end do

The above implies that the source is listed as 3C48M, where the "M" is the unique identifier. The source-size array is passed to mklosweightsmm.f, which then passes it to aips_wtfm.f, where it is used in the analysis.

On 11/16/2012 I also modified the program readvips8.f and called it readvips8m.f. This allows a read of the Nagoya data using either the old IPS inputs or the IPS inputs from Nagoya with RA and DEC values. The latter is used to read the MEXART data for 3C48M using RA and DEC as the source location. Currently, the source needs to be placed in the proper time location within the Nagoya file.

On 11/16-18/2012 the program ipstd_20n_inp_mag_mex.f compiles with the Makefile and runs, including the two MEXART 3C48M radio sources to date (in 2009 - CR 2082, and 2012 - CR 2123), and inserts these into the tomographic program properly. The two sources are accepted into the tomography and have been used to provide IDL image outputs with the MEXART data included.

On 12/19/2012 I was told by Fred Ipavich that CELIAS data are now available in real time as hour averages. Shortly thereafter I modified the ipstd_20n_inp_mag.f program to read these data by including two new subroutines, readcelias8R_d.f and readcelias8R_v.f.
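The quick fix above can be sketched more generically (Python; the Fortran matches the literal name '3C48M', while this sketch generalizes to any name carrying the trailing "M" identifier, and 0.25 is the source size from the loop above):

```python
def assign_source_sizes(sources: list, tagged_size: float = 0.25) -> list:
    """All source sizes start at zero, as in the main program; sources
    whose name ends with the 'M' identifier (e.g. '3C48M') get the
    MEXART source size, mirroring the SRCV/SSV quick-fix loop."""
    sizes = [0.0] * len(sources)
    for i, name in enumerate(sources):
        if name.strip().endswith("M"):      # the unique MEXART identifier
            print("MEXART SOURCE FOUND", name.strip(), "I =", i)
            sizes[i] = tagged_size
    return sizes

print(assign_source_sizes(["3C48   ", "3C48M  ", "3C279  "]))
# -> [0.0, 0.25, 0.0]
```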
On 12/31/2012 I discovered that the density read programs all limited velocity on the reads rather than density. This is now fixed in the ipstd_20n_inp_mag.f program and in all density reads. The fix required that the values of DMAX and DMIN be used (which they previously were not), with dlimitu changed to DMAX and dlimitl changed to DMIN in the main program.

On 12/31/2012 I was able to get the real-time CELIAS analysis working by changing the readcelias8R_d.f and readcelias8R_v.f subroutines to remove a "*" from the end of the file name. Before this, a ???? was appended. This now works with a $DAT wildcard in the command-line call:

celias=$DAT/insitu/realcelias/celias_realtime.hravg*

This is not too elegant, but at least it also now works in sync_ips_daily when the program is run on the web. The current real-time CELIAS read does not allow anything other than a space following hravg* in the calling sequence.

On 12/31/2012 I modified the main programs ipstd_20n_inp.f and ipstd_20n_inp_mag_mod.f so that they can use the CELIAS data (and all the modified in-situ read programs) correctly. On 12/31/2012 I likewise modified the main programs ipstd_10n_inp.f and ipstd_10n_inp_mag_mod.f. The ipstd_10n_inp_mag_mod.f program had a considerable number of overwrites from the editor, ugh. On 12/31/2013 I modified the main program ipstd_10n_inp_mag_mex.f so that it can use the CELIAS data (and all the modified in-situ read programs) correctly.

On 08/25/2013 I began to modify the program ipstd_20n_inp_mag.f to run using a general input - from, for instance, KSWC, Ooty, or Nagoya data. I have termed this program ipstd_20n_inp_mag_g.f until I am able to get it to run with both Nagoya and KSWC data.

On 08/29/2013 I found an error in the Ooty read routine in the conversion of RA and DEC to rLng and rLat for negative values of DEC.
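The DMAX/DMIN fix above amounts to testing density, not velocity, against limits on the density reads; a minimal sketch (Python, with hypothetical limit values):

```python
def accept_density(den: float, dmin: float, dmax: float) -> bool:
    """Accept a density record only if it lies within [DMIN, DMAX] --
    the density reads previously applied the velocity limits
    (dlimitl/dlimitu) here by mistake."""
    return dmin <= den <= dmax

DMIN, DMAX = 0.01, 100.0                   # hypothetical density limits
readings = [5.2, -9.99999, 12.0, 250.0]    # -9.99999 is the bad-value flag
print([d for d in readings if accept_density(d, DMIN, DMAX)])   # [5.2, 12.0]
```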
This error seems to have been present from the beginning, and gives an rLat that can be incorrect by as much as a degree. This is now fixed.

On 08/30/2013 I was able to get the program ipstd_20n_inp_mag_g.f to run with the newly-modified readvipsn8.f subroutine. The subroutine now has an ireadGen and an iprocessGen entry that allow both KSWC.XXXX and nagoya.XXXX data to be read using inputs of the RA and Dec in J1950 coordinates. The program works and converges using both data sets.

On 08/30/2013 I decided to include a second readvipsn8.f subroutine in the main program, named readvipsn8g.f. This subroutine now hosts the reads of the different inputs using the ireadGen and iprocessGen entry calls. This works, but in doing it I found that the readvipsn8.f subroutine was really the readvips8.f subroutine, called readvips8.f externally but readvipsn8.f inside. I modified the readvipsn8.f subroutine at the same time to provide the gen outputs. I hope this does not cause trouble.

On 08/30/2013 I produced a new subroutine, precession_drive.f, that places all of the precession from the gen part of readvipsn8g.f into a single subroutine. The precession_drive.f subroutine is more general than the version that was written in the readvipsn8g.f subroutine: it allows RA and DEC precession from J1950 to present, from J2000 to present, from J1950 to J2000, or finally from any date to any date, just by inputting a mode switch. The subroutine was tested to work the same as the version in the readvipsn8g.f subroutine. An added available output is the precessed RA and DEC in character format.

On 09/05/2013 I changed the value of nMaxT from 250 to 600 in the write3d_bbt.f and write3d_bbtm.f subroutines. On 09/05/2013 I corrected the error in the Ooty read program in the readvips8.f subroutine (that is called as readvipsn8.f) for negative source declinations, and I recompiled all of the ipstd_10n subroutines.
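The any-date-to-any-date capability described for precession_drive.f can be sketched as follows (Python; this is not the Fortran code, just a standard rigid-rotation precession using the IAU 1976 angles zeta, z, theta, which carries RA/DEC between any two Julian epochs and so subsumes the J1950/J2000/present mode switches):

```python
import math

def precess(ra_deg, dec_deg, epoch_from, epoch_to):
    """Precess RA/DEC (degrees) between two Julian epochs (years),
    using the IAU 1976 precession angles in arcseconds."""
    T = (epoch_from - 2000.0) / 100.0        # centuries, J2000 -> start epoch
    t = (epoch_to - epoch_from) / 100.0      # centuries, start -> end epoch
    s = math.pi / (180.0 * 3600.0)           # arcsec -> radians
    zeta = s * ((2306.2181 + 1.39656*T - 0.000139*T*T)*t
                + (0.30188 - 0.000344*T)*t*t + 0.017998*t**3)
    z    = s * ((2306.2181 + 1.39656*T - 0.000139*T*T)*t
                + (1.09468 + 0.000066*T)*t*t + 0.018203*t**3)
    th   = s * ((2004.3109 - 0.85330*T - 0.000217*T*T)*t
                - (0.42665 + 0.000217*T)*t*t - 0.041833*t**3)
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    a = math.cos(dec)*math.sin(ra + zeta)
    b = math.cos(th)*math.cos(dec)*math.cos(ra + zeta) - math.sin(th)*math.sin(dec)
    c = math.sin(th)*math.cos(dec)*math.cos(ra + zeta) + math.cos(th)*math.sin(dec)
    ra_new = math.degrees(math.atan2(a, b) + z) % 360.0
    return ra_new, math.degrees(math.asin(c))

# Same epoch in and out is the identity; over 50 years an equatorial
# source's RA advances by roughly 50 * 46"/yr, about 0.64 degrees.
print(precess(10.0, 20.0, 2000.0, 2000.0))
print(precess(0.0, 0.0, 2000.0, 2050.0))
```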
The recompiled ipstd_10n_inp_mag.f program no longer gives an error when the Ooty data are read without using in-situ parameters. The program also no longer bombs when it is asked to write out magnetic fields at the Ooty resolution. Thus, changing nMaxT from 250 to 600 in the write3d_bbt.f subroutine fixed the problem of obtaining magnetic field at the Ooty 10 x 10 degree resolution. The magnetic field writes at nv3h* resolutions are still in question.

****

On 01/09/2014 there is a new standard format now available for the IPS analysis. Although preliminary, it is currently being implemented by STELab, and will hopefully be adopted by all other groups. I have begun to change the readvipsn8g.f routine to accommodate this new input. Whereas the readvipsn8g.f subroutine has been used to read STELab data and is fit to read KSWC data, the current subroutine has been renamed readvipsn8g1.f to distinguish it, so that it reads the current KSWC data and the new Nagoya data format. The main program that uses this read, ipstd_20n_inp_mag.f, has been renamed ipstd_20n_inp_mag_g.f.

On 01/20/2014 I found that the program ipstd_20n_inp_mag_g.f, which calls extractdnn.f and extracttdnnn.f, does not go through the year end correctly, but does work once the new year is well under way in 2014 - from January 26th on, but probably not before. This needs to be fixed at least before the next year. The extract subroutines do not work in forecast mode before the 26th of January. The write_3D_infotd3D subroutines may not work correctly either.

On 01/22/2014 I was able to fix the problem of the extractdnn.f program not going through the end of the year. I did this by re-writing a section of the extract code so that the check of the start and end dates works better. The fix I produced needed the beginning year of the tomography run input to the subroutine. I have renamed this subroutine extractdvd.f to avoid confusion.
The program still needs to be checked to be sure that the other aspects of the extract program work. Also, the extracted time series now covers too much time in the forecast program and does not interpolate far enough into the future for the ea time series. However, next I will probably find that the extractdnnn.f routine can be similarly fixed to work correctly, and this will forecast farther into the future.

On 01/22/2014, following the above fix, I noted that the year crossing still gives a one-day shift to the time series. I found the way to fix this in the mkpostd_in routine, where a day was subtracted from the second-year time. It was a matter of whether the second day begins at 0 and counts 1 at the end of the day, or the count begins at 1 with the times in hours added to this. In the call DATE_DOY(0,nYr,cMon,iM,iD,iDoy), the conversion provides a count that begins at 0 and ends at 1 at the end of the day. This difficulty has existed ever since the IPS tomography analysis was re-programmed many years ago. I expect that the SMEI tomography is incorrect also, but I need to check.

On 01/23/2014 I fixed the extractdnnn.f routine by the same procedure that was used to fix extractdnn.f. The extractdnnn.f subroutine is now renamed extractdvdm.f, and has the year of the beginning of the tomography run input in the calling sequence. Both of the two extract routines have been tested to assure that they give the same forecast time-series output lengths as before. Also, the standard time series tested seems to give close to exactly the same result over the same time period when run in archival mode.
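The zero-based day-count convention described above can be made concrete with a short sketch (Python; this mirrors the reading of DATE_DOY(0,...) given in the text, with the fractional time in hours added on top, and is not the Fortran routine itself):

```python
import datetime

def day_of_year_zero_based(year: int, month: int, day: int) -> float:
    """Day-of-year counted as described for DATE_DOY(0,...): the count
    begins at 0 at the start of Jan 1 and reaches 1 only at the end of
    that day.  Adding hours/24 then gives a consistent fractional time;
    mixing in a 1-based count is exactly the one-day shift seen at the
    year crossing."""
    doy_one_based = datetime.date(year, month, day).timetuple().tm_yday
    return float(doy_one_based - 1)

# Jan 1 is day 0.0; noon on Jan 2 is day 1.5.
print(day_of_year_zero_based(2014, 1, 1) + 12.0 / 24.0)   # 0.5
print(day_of_year_zero_based(2014, 1, 2) + 12.0 / 24.0)   # 1.5
```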