ipsd2020 nagoya=nagoya,/home/bjackson/dat/nagoya/yearly --> ./ipstd_20n_inp_intel nagoya=nagoya,,yearly

./ipstd_20n_inp_intel nagoya=nagoya,,yearly, ace=$DAT/insitu/acesw_????.hravg

./ipstd_20n_inp_mag_mod_intel nagoya=nagoya,,yearly, nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss

As of 11/8/2012 the following work:

./ipstd_20n_inp_mag_intel nagoya=nagoya,,yearly, ace=$DAT/insitu/acesw_[4].hravg nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss

./ipstd_20n_inp_mag_intel nagoya=nagoya,,yearly, swace=$DAT/insitu/swace_[4].hravg nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss

./ipstd_20n_inp_mag_intel nagoya=nagoya,,yearly, wind=$DAT/insitu/wind_swe/????_WIND_hourly_averages nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss

./ipstd_20n_inp_mag_intel nagoya=nagoya,,yearly, celias=$DAT/insitu/celias_[4].hravg nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss

./ipstd_20n_inp_mag_intel nagoya=nagoya,,daily, celias=$DAT/insitu/realcelias/celias_realtime.hravg* nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss   ! (for real-time CELIAS data analysis) (works for forecasts)

./ipstd_20n_inp_mag_g_intel gen=nagoya[4],~/ wind=$DAT/insitu/wind_swe/????_WIND_hourly_averages nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss   (works if nagoya.2013 or nagoya.2014 is in /home/bjackson/)

./ipstd_20n_inp_mag_g_intel gen=nagoya,,daily ace=$DAT/insitu/acesw_[4].hravg nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss   (works for forecasts)

./ipstd_20n_inp_mag_g_intel gen=KSWC,,yearly, wind=$DAT/insitu/wind_swe/????_WIND_hourly_averages nso_noaa=nso_ktpk[4]_[3].fts,$DAT/map/nso_ktpk/hcss   (seemed to work)

./ipstd_20n_inp_mag_v14c_intel gen=KSWC.[4],~/   (worked to attempt to read data from KSWC.2014 if this file was in /home/bjackson/)

./ipstd_20n_inp_mag_v14d_intel gen=nagoya,~/ gen=KSWC,~/ nso_noaa=$DAT/map/nso_ktpk/hcss/nso_ktpk[4]_[3].fts   (works for more than one data set read into the tomography program ./ipstd_20n_inp_mag_v14d_intel)

./ipstd_10n_inp_mag_v14e_intel nagoya=nagoya,,daily, nso_noaa=./nso_nsp_[4]_[3].fts   (temporarily works for nso_nsp_[4]_[3] maps in the same directory where the executable is run)

Revisions:

On the week of 11/13/00 I began modifying both mkvmaptdN.for and MKDMAPTDN.for using mk_ipsd2020NN. I modified the IPSDTDV2020NN.f program to use the above routines with scratch arrays placed in the calling sequence. When I did this I noticed that the scratch arrays originally used were not dimensioned properly for the 2020 program. The scratch arrays zeroed at the beginning of the program were not fully zeroed, and this was an error. When this was done properly, the 2020 program no longer converged with few holes - there were a lot of holes in both velocity and density, and the results that I had come to like and agree with were no longer correct. I presume that the increased number of line-of-sight crossings from the non-zeroed arrays was partially responsible for the previous acceptable results (which always seemed to be consistently the same for any given Carrington map). I consequently got the idea to filter the weights and the number of line-of-sight crossings in both time and space, so that these are more consistent where there are only a few lines of sight, in a way similar to the answers. Thus the weights and the numbers of line-of-sight crossings are now smoothed spatially and temporally with the same filters as the answers. This seems to work wonderfully, and allows the line-of-sight crossing threshold to be set lower than before - to as low as 1.5.
At 1.5 most of the Carrington map is filled and the program proceeds to convergence (on Carrington map 1965) pretty well. As of 11/16/00 the comparisons with in-situ data work almost as well as before, but time will tell.

In an additional change on 11/15/00, I modified mkvmaptdN.for and MKDMAPTDN.for to increase the effective number of line-of-sight crossings by sqrt(1/cos(latitude)). This allows the acceptance of a lower number of line-of-sight crossings near the poles, since the spatial filtering is greater there. This also seems to work.

On 11/15/00 I also modified fillmaptN.for and copyvtovdN.for to accept scratch arrays through the calling sequence. IPSDTDV2020NN.f also needs to be modified to accept these subroutines with their new calling sequences.

The convergence with the above modifications (11/16/00) seemed pretty ragged until sources were thrown out, after which the convergence proceeded in a very stable fashion. The peak in density at 7/15 was absent in the density data, and this may be because of the thrown sources.

On 11/16/00 I then also smoothed FIXM spatially and temporally in both mkvmaptdN.for and MKDMAPTDN.for. This also converged in a ragged fashion even after the thrown sources, and in the end did not write the 3-D arrays I asked for - maybe memory was clobbered. The program did converge, however. The in-situ time series was not at all like the model!!!

On 11/16/00 I then changed back to the earlier convergence scheme where FIXM is not smoothed. The 3-D arrays still are not being written out.

On 11/16/00 I noticed that the mkvmodeltd.for routine was not as in the vax [.ipsd2020.normn] subdirectory, but was an older version. I re-wrote the newer version (which does not need a scratch file) and replaced mkvmodeltd.for with the newer version. I also checked to see that the MKGMODELTDN.for subroutine was unchanged from the vax, and I revised it so that it passes two scratch arrays through its calling sequence. The newer version iterates identically to the old version. There seems to be no change in the write3b status throughout the iterations, indicating that nothing gets clobbered during the iterations.

On 11/21/00 I fixed the fillwholet.for subroutine so that a scratch array is now passed through its calling sequence, and the error noticed above on 11/16/00 is fixed. Things seem pretty much the same when I do this, and now the t3d files are output.

On 11/21/00 I stopped using distributed weights, since the program NOT using distributed weights seems to converge as well as the program version that does use distributed weights. The EA answers are somewhat different, but not much.

On 12/5/00 I found an error in the write3dinfotd1N.for subroutine in forecast mode. The subroutine seems to give two 3d files that are incorrectly labeled (and interpolated) because AnewI is wrong. I believe AnewI is wrong because the input N values are wrong in the forecast mode. The problem was in the use of XCintF: there was not an N+1 copy of the XCintGG array into the XCintF one in the main program. In the forecast mode this caused the bomb. However, for whatever reason there once was a forecast mode for the write3dinfotd1N.for routine, it does not exist any more. I have therefore eliminated the forecast mode for write3dinfotd1N.for in both the main program and in the subroutine, so that now all that is needed is a single call to write3dinfotd1N.for.
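A minimal sketch (not the actual mkvmaptdN.for / MKDMAPTDN.for code) of the crossing-count handling described in the 11/13/00 - 11/15/00 entries above. The names, the argument list, and the radian latitude array are all assumptions: CROSS and WTS are taken to hold the line-of-sight crossing counts and weights after they have been smoothed with the same spatial and temporal filters as the answers, the effective count is boosted by sqrt(1/cos(latitude)) toward the poles, and cells are then tested against the (roughly 1.5) crossing limit.

      subroutine CountGoodCells(nLng, nLat, CROSS, WTS, RLAT,
     &   CMIN, NGOOD)
C Sketch only - placeholder names, not the real tomography variables.
C CROSS, WTS: smoothed crossing counts and weights on the source surface.
C RLAT: cell-center latitudes in radians (away from +-90 deg).
C CMIN: minimum acceptable crossing count (about 1.5).
      integer nLng, nLat, NGOOD, i, j
      real CROSS(nLng,nLat), WTS(nLng,nLat), RLAT(nLat), CMIN, CEFF
      NGOOD = 0
      do j = 1, nLat
      do i = 1, nLng
C Boost the effective crossing count toward the poles so that fewer
C real crossings are needed there (the spatial filtering is greater).
        CEFF = CROSS(i,j)*sqrt(1.0/cos(RLAT(j)))
        if (CEFF.ge.CMIN .and. WTS(i,j).gt.0.0) NGOOD = NGOOD + 1
      end do
      end do
      return
      end

In the real program the test would feed the hole-filling and convergence logic rather than a simple counter; only the smoothing-before-testing order, the sqrt(1/cos) scaling and the 1.5-level threshold are the points being illustrated.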
On 12/7/00 I found an error in the MKTIMESVD.for routine in that the NmidHR value was subtracted from the time to start the beginning day. In the main program this value was labeled number of hours "before" midnight and was -10 for Nagoya. The main program now reads number of hours "from" midnight and is still -10 for Nagoya, as before. UCSD is +8 or +9, as you know, depending on daylight saving time. The MKTIMESVD.for subroutine now adds this value to begin the day at midnight. This has the effect of changing all the extensions of the t3d files, since the temporal intervals now begin at a different time - midnight at Nagoya. If the t3d files are interpolated by 1 in write3dinfo1N.for, this now has the effect of dividing the day into times before midday in Nagoya and after midday in Nagoya. If the files are interpolated by 2 in write3dinfo1N.for, then the t3d files are divided (approximately) into midnight to 8am, 8am to 4pm and 4pm to midnight. The extension values have been checked by PL HE Pandora on the vax.

On 12/7/00 I found that the forecast 51874 run, which terminates data at this time, or 12 UT on November 25 (355 sources found), gives the last matrix at 1970.064 (centered at 1970.0825 or 2 UT November 26). The forecast run at 51874.5 (0 UT November 26) (371 sources found) gives the last matrix at 1970.064 as well. Since this does not give even a one-day 3d matrix advance forecast, I have now changed the values of NTV and NTG by one more so that the 3d matrix is written to a file for at least one day beyond the current time.

On 12/7/00 I found that in the forecast runs there were as many sources used as there were source values within the NLG and NLV increments set by XCintG and XcintV. I fixed this in the main program so that now all the data that is used in the forecast mode comes from times that are less than the Carrington rotation of the Earth at the input forecast time. The current mk_ipsd2020NN uses the IPSDTDV2020NN.f main program.

On 01/30/01 I copied all the programs of the mk_ipsd2020NN compilation over to the for/IPSd2020NN subdirectory so that this program and its subroutines are now complete and separate from the other fortran programs.

On 01/30/01 I also began to alter the FIXMODELTDN.for and mkvmodeltd.for subroutines so that they better reproduce in-situ velocities. I have done this by copying all the files to the for/IPSd2020 subdirectory so that this program and its subroutines are now complete and separate from the other fortran programs. I renamed the IPSDTDV2020NN.f program to IPSD2020.for. When I ran the program for 1965 in this subdirectory, the results for 16., 16., 0.25, 0.25 and 0.40 for the EA_FILE idl run_compareaceLp run were 0.771 and 0.066, and this is slightly different from before, which was 0.688 and 0.058. I don't know why there is this slight difference.

On 01/30/01 I began a lot of changes in mkvmodeltd.for in order to get better velocities into the matrix. I think the original (or perhaps the model 01/31/01B) is the correct one for weighting, but I tried a lot of things to see if there could be an improvement in the in-situ comparison. There was not a whole lot of difference when using the things I tried, below.

C     VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! Original
C     VWT = VWT + VWTij(J,I)
C     VWTi = VWTi + VWTij(J,I)
C     VWTij(J,I) = VWTij(J,I)*VSN                ! Added B. Jackson 01/30/01
C     VWT = VWT + VWTij(J,I)
C     VPERP = VPERP + VWTij(J,I)*VELO
C     VWTi = VWTi + VWTij(J,I)
C     VWTij(J,I) = VWTij(J,I)*VSN                ! Old run long ago.
C     VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! 01/31/01A
C     VWT = VWT + VWTij(J,I)*VSN
C     VWTi = VWTi + VWTij(J,I)*VSN
C     VWTij(J,I) = VWTij(J,I)*VSN
C     VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! 01/31/01B Seemed ~best so far
C     VWT = VWT + VWTij(J,I)
C     VWTi = VWTi + VWTij(J,I)*VSN*VELO
C     VWTij(J,I) = VWTij(J,I)*VSN*VELO
C     VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! 02/01/01A
C     VWT = VWT + VWTij(J,I)
C     VWTi = VWTi + VWTij(J,I)/VSN
C     VWTij(J,I) = VWTij(J,I)/VSN
C     VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! 02/01/01B (Original)
C     VWT = VWT + VWTij(J,I)
C     VWTi = VWTi + VWTij(J,I)
C     VWTij(J,I) = VWTij(J,I)
C     VPERP = VPERP + SQRT(VWTij(J,I))*VSN*VELO  ! 02/01/01C
C     VWT = VWT + SQRT(VWTij(J,I))
C     VWTi = VWTi + SQRT(VWTij(J,I))*VSN*VELO
C     VWTij(J,I) = SQRT(VWTij(J,I))*VSN*VELO
      VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! 01/31/01B Seemed ~best so far
      VWT = VWT + VWTij(J,I)
      VWTi = VWTi + VWTij(J,I)*VSN*VELO
      VWTij(J,I) = VWTij(J,I)*VSN*VELO
C     VPERP = VPERP + (VWTij(J,I)**2)*VSN*VELO   ! 02/02/01A
C     VWT = VWT + (VWTij(J,I)**2)
C     VWTi = VWTi + (VWTij(J,I)**2)*VSN*VELO
C     VWTij(J,I) = (VWTij(J,I)**2)*VSN*VELO
      VW = VWTij(J,I)*VSN*VELO                   ! 01/31/01B rewritten
      VPERP = VPERP + VW
      VWT = VWT + VWTij(J,I)
      VWTi = VWTi + VW
      VWTij(J,I) = VW

Thus, I will settle on the version above. All other versions of the program should incorporate this weighting, which essentially places all the line-of-sight variations into the weight. The nominal 16., 16., .65, .65, .25, .25, .4 run of 1965 gives 0.647546 and 0.229140 for the density and velocity correlations for the restricted data set and ACE in-situ measurements around the time of the July 14 CME peak. Other combinations of parameters give higher correlations, but none give the same density values in-situ/model with 16. and .65 as do these.

The run of velocity deconvolution alone (2/12/01) does not allow the parameters to be set. This is now fixed in the main program (2/12/01). The version of the program that deconvolves velocity alone (both with constant density and with mv^2 = constant) bombs in gridsphere with bad VMAP values before any iterations are reached. I have now fixed this and also fixed the problem when G-level alone is selected. The problem was in the setup for each initial velocity or density array.

On 2/14/01 the velocity mv^2 works. On 2/14/01 the density mv^2 does NOT work to give velocity. Thus, I had better fix the Mk_D2V subroutine.

On 2/15/01 I re-did a lot of the program to accommodate the modes that use a constant velocity and density and the mv^2 assumptions, plus a write-out of the DMAPHOLE and VMAPHOLE data. These work now and have been checked using the routine to converge on both density and velocity, on velocity using constant density, and on velocity using mv^2. The latest runs give the nominal using 16., 16., .65, .65, .25, .25, .4 for 1965 of 0.666897 and 0.417346. I think the "better" correlations may happen because fillwholeT is no longer applied double, but I am not sure of this. The correlations look considerably different now, too. Thus, to check, I went back to using fillwholeT as before, and when I did this the nominal for 1965 looked the same as it has in the past in the iterative phase, at least up to iteration 14 or so where the two began to diverge ever so slightly, in the percentages only. The correlations were 0.647546 and 0.229140, which were identical to before. I thus returned the fillwholeT routines so that the times are no longer double-smoothed, and I began a search for more "nominal" parameters.
I also got a version of MK_DTV.FOR from the CASS01 fortran files, renamed it MK_DTVN.for, and compiled this version in the IPSd2020 subdirectory. The new version now works for mv^2 = constant for density iterations and gives approximately correct velocities. The new version also works for mv^2 = constant for velocity iterations and gives approximately correct densities. Oh, yes, another innovation is that the mv^2 = constant is now determined for each iteration with no convergence problems as far as I can tell. I now also determine the equatorial density and velocity (subroutines Deneq3.for and Veleq3.for) so that the mv^2 = constant is now correctly applied to the other nominal value. In other words, the average speed at the solar equator is used to determine the mv^2 constant to produce density when speed is modeled. Likewise, the average density at the solar equator is used to determine the mv^2 constant when density is modeled.

The nominal 16., 16., .65, .65, .25, .25, .4 for 1965 gives 0.666897 and 0.417346.

Velocity convergence: Constant density and the nominal for 1965 above gives -0.123838 and 0.0811669. (The density above looks in the correct range, but it just is not constant. This is because the Mkshiftd routine allows non-constant interactions to build up with height as long as velocities are non-constant.) Also, mv^2 density and the nominal for 1965 gives -0.0242271 and 0.115706. End eq. vel = 334.436707

Density convergence: Constant velocity and the nominal for 1965 above gives 0.347806 and NaN. Also, mv^2 velocity and the nominal for 1965 above gives 0.347806 and 0.318303. End eq. den = ? (Something goes wrong when this last is done in that the t3d file doesn't seem to write out OK. Also, it isn't clear why the correlation with velocity = constant and velocity with density given by mv^2 does not give a different density correlation. It seems to me that it should.)

On 2/16/01 I think I found the error, at least in the last problem. The DMAP is not being updated each time the mv^2 is done; DMAP is consistently being held constant. Also VMAP is the same.

On 3/15/01 I compiled the ipsd2020.for program with latitude values equal to 10 and ran the program on rotation 1965 with the script compka2020script to check for a better convergence than with the old version of the program. The program works better, and while the answers are almost the same in this version as in the test version of the program, inputs 15, .55, .25, .30 for 1965 give 0.840 and 0.230 for the old program versus 0.844 and 0.230 - an ever so slight difference.

On 3/19/01 I copied the test version of the ipsd2020.for program over as the official version of ipsd2020.for, and am running scripts with it as tests. To do this I needed to modify the ips2020inputm.f program in two places. This version of ipsd2020.for has better inputs and outputs for files read from the disk.

On 4/2/01 a new include file was used for MAPRADIAL.H. This new radial map is far simpler and uses equal intervals up from the source surface. Level 10 is no longer 1 AU anymore, though. Paul claims this works better in his IDL programs. It does change the correlations a lot, so that the original 15, .55, .25, .30 for 1965 gives 0.646 and 0.344 now.

On 4/3/01 I replaced the CopyVtoVDN and CopyDtoDVN subroutines, in the case where nTG .eq. nTV and XCbegG(1,1) .eq. XCbegV(1,1), with a simple call to ArrR4Copy(nLngLatnTG,*,*).
This allows the tomography to run faster and, in addition, the velocity and density tomography is more independent for the two 3D matrices, velocity and density. I am now running scripts to see if I can recover the same inputs as before. The above did not work too well because the Copy programs actually smooth over time significantly, and I couldn't get back the original results. I settled on placing fixes in the writes to the EA files and to the t3d files.

On 4/10/01 MKDMAPTDN.for was modified to use a smoothed gridsphere to limit what is and isn't included in the crossed-component limit (the gridsphere mode in /IPSHTD/MKDMAPTDN.for is now 3 rather than 4 in the versions of MKDMAPTDN.for in other versions of the main program).

On 4/10/01 mkvmaptdN.for was also modified to use a smoothed gridsphere to limit what is and isn't included in the crossed-component limit (the gridsphere mode in /IPSHTD/mkvmaptdN.for is now 3 rather than 4 in the versions of mkvmaptdN.for in other versions of the main program). This changes things so that now I need to again re-run the script to get the best fit to CR1965.

On 4/18/01 or thereabouts, I added two versions of Paul's write3d_info.for subroutine to the main ips2020.for program.

On 4/23/01 I added Paul's adjustjdcar.for subroutine to the ipsd2020 program. This may change things a little, and since the scripts are still running, I plan to wait to find a better value for the parameters.

On 1/18/02 I changed the variable 25.38 in subroutine MKTimesvd and both EXTRACTD's to 27.275.

On 5/20/02 I began the process of making the ipsd2020 program accept a three-component xcshift parameter that includes latitude as well as time shifts. This involves modifications to the main program ipsd2020.f and to mkshiftdn.f, mkpostd.f, extractd.f, extractd3d.f, write3d_infotd.f, write3d_infod3d.f, mkveltd_check.f and get4Dval.f. To do this a value of Rconst needs to be added to the parameter outputs of mktimesvd. This constant is then added to the inputs of mkshiftdn.f and extractd.f to make them work in an accurate form.

On 5/20/02 I modified mkpostd.for to incorporate the complete latitude change. This also takes a modification in the main program to incorporate this projected latitude. In addition, modifications need to be made to the calls to Get4dval, MkVmodeltd and MkGModeltdn to include this projected latitude (XLAT --> XLproj). This is done to incorporate the new projected values of latitude that are already handled correctly in these routines.

On 5/21/02 the version of the program that was transferred to CASS185 is the same as the version on both the /transfer and TIPSd2020 subdirectories, with the exception that the transfer version does not do magnetic fields while the programs that Paul and Tamsen install are undergoing change. The version of the program on CASS183 in the main directory, however, does deal with magnetic fields in the old way. Both the CASS183 and CASS185 versions give identical EA and E3 files using default parameters. The timing of the spike for the Bastille-day CME is late in the model compared to the in-situ response. Something funny happened, though, when the program was transferred, because the source numbers and the output values changed considerably and gave somewhat different answers in both the transferred and non-transferred versions when it was run.

On 5/22/02 I tried the sim_MOD routine in the ipsd2020 program to see if this makes a difference in the location/position of the peak for the Bastille-day CME. This run did not work.
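A minimal sketch, with placeholder names, of how the mv^2 = constant coupling described in the 2/15/01 entry above can be applied when velocity is the modeled quantity. The exact normalization in Deneq3.for, Veleq3.for and MK_DTVN.for may well differ; here the constant is simply fixed from the equatorial averages and the density map follows as constant/v^2.

      subroutine MVVtoDen(nLng, nLat, VMAP, DMAP, DENEQ)
C Sketch only - placeholder names, not the Deneq3/Veleq3/MK_DTVN code.
C Fix the m*v^2 constant from the equatorial averages, then derive a
C density map from the modeled speed map as constant/v^2.
      integer nLng, nLat, i, j, JEQ
      real VMAP(nLng,nLat), DMAP(nLng,nLat), DENEQ, VELEQ, CONST
      JEQ   = (nLat + 1)/2          ! row nearest the solar equator
      VELEQ = 0.0
      do i = 1, nLng
        VELEQ = VELEQ + VMAP(i,JEQ)
      end do
      VELEQ = VELEQ/float(nLng)     ! mean equatorial speed
      CONST = DENEQ*VELEQ**2        ! m*v^2 constant at the equator
      do j = 1, nLat
      do i = 1, nLng
        DMAP(i,j) = CONST/VMAP(i,j)**2    ! density from the speed
      end do
      end do
      return
      end

When density is the modeled quantity instead, the same constant would be fixed from the mean equatorial density and the speed recovered as sqrt(CONST/DMAP), as described in the notes above.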
On 5/23/02 I discovered that I had inadvertently placed the new latitude shift in Get4Dval as a projected rather than a line-of-sight variable. There is no difference in current analyses, but there will be in the future. This has now been changed in both the transfer and regular versions. I also checked to be sure that all the Get4Dval calls have time first in the calling sequence, since this is out of sequence from the index values. They all do.

On 5/23/02 I tried the sim_MOD routine in the ipsd2020 program to see if this makes a difference in the location/position of the peak for the Bastille-day CME. To do this I have had to modify the mkshiftdn.f routine to include nCar, JDCar, NCoff and FALLOFF in the subroutine calling sequence. This change is not in the transfer version.

On 5/23/02 I discovered an error in Get4Dval. It did not work with the Get4dval(3,....) set. This is now corrected, I hope.

On 5/24/02 I found that the sim_MOD routine would not work because arrays were not dimensioned properly in sim_Mod and shift_MOD. I fixed these now so that sim_MOD works. However, on 5/28/02 I found that the answers given by sim_MOD with default parameters using ipsd2020 did not work to give good answers, even though the routine seems to work in ipshtd77.

On 5/29/02 I discovered the problem with write3d_infotd3D.f. The bottom density map array written to disk that was copied into the file to be output was being multiplied by a file that was set to near-bad values. This gave a write error to disk for a non-standard file. This is now fixed. The dummy values of VM and DM that are written in the subroutine need only have two dimensions, and this has now been fixed in the main program and subroutine.

On about 7/1/02 I replaced the fillmapols Xmid value entry because it was incorrect, and I also placed fillmapols in the nv3f file prepare analysis.

On about 7/11/2002 I changed mktimesvd.f so that the times begin about half a rotation earlier than they did prior to this date. I believe these changes are currently transferred to the cass185 computer version of the ipsdt program (10/31/02).

On 8/16/02 I changed FillmapOLS so that it is now more automatic and has two modes. The first mode is as before: to stitch together the non-bad points at the point in the sphere furthermost from Earth. As far as I can tell, the early version worked OK. The second mode limits the regions of the maps at the most distant longitudes from Earth so that these regions do not become too great (or small) in either density or velocity according to some limit. There is also a limit set on the angular distance that this effect can influence - now set at 135 degrees from the Earth. After several runs where there were problems with the program, on 8/17/02 I think I fixed this so that the program now works.

On 10/30/02 I copied ipsdt.f to the transfer directory and recompiled it in that directory. The version of mkshiftn.f in this directory does not have several inputs that are present in the parent directory (see my note on 5/23/02). The recompiled version of ipsdt.f now (10/31/02) runs and has the same numbers of sources as the ipsd2020 version on the cass183 computer. There are differences still unaccounted for at the zeroth iteration time step for the first g-convergence. The program runs to iteration completion (10/31/02).

Late on 10/31/02 I got the ipshdt program down from CASS185 and got it to run and give the exact same iterative values as the ipsd2020 program I have on CASS183.
The problem was with fillmapols.f, which had not been updated on the transfer subdirectory on CASS183. The ipsdt.f program now dies at the end, I expect from the updates in the extract routine. The IDL program on CASS183 does not now produce correct answers from the EA file generated using the current version of ipshdt. This is because the answers in the EA file are entirely different from what they should be, including negative velocities. I expect I have incorrect versions of the EXTRACT routines in the CASS183 library, so that they now give bad answers in the EA files. Since I need to speak to Paul to fix this, I had better wait till he comes in.

On 6/2/03 I made a subdirectory TIPSd2020new so that I could take the latest version of my ipsd2020 and modify time in it to reflect real times and not rotations.

On 6/2/03 the main problem in the program is that time is currently confused with rotation, and this is continued in the use of the include files. What is needed for the time-dependent tomography is to make time just that - probably in days from some beginning start-time day several days prior to the main observation (as is currently done in terms of a solar rotation). The only place that rotation should come in is in mkshift, where the rotation rate at a specific time needs to be used to get back to the surface map location at a given time. In this instance the actual rotation rate should be used for each time in order to correctly account for the variation of the solar rotation rate at any given time of the year. The files that contain these include routines in TIPSd2020new are:

File : copyvtovdn.f ***
File : extractd3d.f ***
File : extractd.f ***
File : fillmaptn.f ***
File : get3dtval.f ***
File : get4dval.f ***
File : mkpostd.f ***
File : mkshiftdn.f ***
File : mkshiftdnn.f
File : write3d_infotd3d.f
File : mkvmodeltd.f (added 11/7/03 when mkvmodeltd.f did not work on transfer)
File : mkgmodeltdn.f (added 11/7/03 when mkgmodeltdn.f did not work on transfer)

On 6/3/03 I modified mkpostd.f so that it now calls ECLIPTIC_HELIOGRAPHIC8 and is now supposed to use the time of the LOS in DOY and fraction. The DOY is not yet input in double precision, but now it can be, and the time interval from the beginning should be OK as single precision to time precisions of about a minute.

On 6/4/03 I modified mktimesvd.f into a version mktimes.f so that now the values needed for the above routines are available. I also modified the version of mkshiftdn.f to no longer use ReConst and to use the now-OK versions of dXCL and dXCT. There is now no RconstG or ReconstV. This changes the inputs to mkshiftdn.f.

By 6/5/03 I had gotten all the above to compile using the newest values of real*8 time in Doy8, and it compiles and runs.

On 9/4/03 I modified extractd.f so that it now correctly outputs doy8 and hour.

On 9/5/03 I found that the mkshiftn.f routine did not handle the longitude shifts correctly in this version of the time-dependent tomography. I thus implemented shifts in longitude in the corotating frame, such that longitude shifts in terms of Carrington rotation are used that reflect the slow-fast variation of the Sun below the Earth for whatever given time of the year is chosen for the analysis. This bug most probably was present in the tomography since I implemented the three-element xcshift parameter shift matrix on 5/20/02.

On 9/8/03 the program now runs and extractd.f produces an EA file from the data.
The EA files still do not correlate well with the real data, so maybe there is yet another error in the shift matrix.

On 9/9/03, in thinking over the problem, I have come to the conclusion that the time of the observation sets the location on the Sun for the outward material flow, and that the rotation of the Sun for the material observed at that time is simply the inertial rate of 25.38 days. Thus the location of that material on the solar surface is simply determined by the inertial rate. Thus, the version of the program before 1/18/02 was more correct, and since that time the program has been in error. This is in keeping with the problem I found where before that time the program seemed more stable. I checked the shift matrix, and now both the flight time for the material and the shift in longitude are exactly the same, as should be the case for radially outward-flowing material.

On 9/9/03 something is still not right in that Nagoya noons are at 3 UT and this is now where the time intervals begin and end. However, the xcint intervals that should begin and end at local midnight now also seem to begin and end at local noon, and this is not correct.

On 9/10/03 I think I have proved to myself that the problem of 9/9/03 did not exist. I then began work on the bomb in the first write3d_info.f routine and in the early afternoon found the bug - the calling list had an extra comma. With this fixed, I then got another problem in write3d_info3d.f fixed, and then another bug in extract3d.f fixed, and finally the program runs through and gives reasonable answers with the default inputs.

On 9/10/03 the default inputs are:

begin ht            -15
G-level space       = 13.0
velocity space      = 13.0
G-level temporal    = 0.7
velocity temporal   = 0.7
G-level power d     = 0.65
G-level power v     = 0.65
radial g fall gd    = 0.15
radial g fall vd    = 0.15

For the above, the unlimited 1964.6 run gives EA -0.231, -0.040 and E3 -0.260, 0.062. The limited run gives EA 0.036, 0.029 and E3 -0.016, 0.124. Back on 4/2/01, after changes in mapradial were made, the values 15, .55, .25, .30 for 1965 gave 0.646 and 0.344. I'll now try these same parameters again. For these, the unlimited 1964.6 run gives EA -0.164, -0.027 and E3 -0.233, 0.061. The limited run gives EA 0.122, 0.058 and E3 -0.057, 0.007.

On 9/16/03 I got the script running again in order to determine the extent to which the settable parameters (filters, etc.) change the analysis. The program is now very stable to changing filter parameters. However, the correlations never seem to be very good, or at least never seem to give large peaks in the model, and in particular the large peak on 7/15/03 in the ACE data is not reproduced well in either the velocity or density models.

On 9/16/03 I note that the way of determining vratio and dratio in mkshiftdn.f is different from the way it was done back on 03/20/01 in the IPSd1010 subroutine mkshiftd.f. The way back then was to use the ratio between levels and accumulate this, rather than the current scheme of using the shift to the base level as is done now.

On 9/18/03 I first tried the most recent version of the tomography program with the newest (cass185, I hope) IPS sources. I finally won! The tomography program now gives back the peak in the 1965 rotation that was present back on 4/2/01! Not only that, the peak is present in the model no matter whether the source surface is at 15 Rs or 1 Rs and no matter where the rotation begins - some differences are present when runs are made.
The EA files give nearly the same correlations as do the E3 files interpolated from the 4D matrix in this case, with only minor differences when 3 intermediate steps in the model are interpolated.

On 9/19/03 I got the scripts running and was able to determine the best parameters - not much different from the ones found back on 4/2/01. The parameters 15, 0.75, 0.30 and 0.25 give EA correlations from the limited time series of 0.610 and 0.793 for rotation 1964.6. The density peak in the model does not go as high as the ACE data and is displaced somewhat right of the ACE peak. The velocity correlation reproduces the two velocity peaks and the dip between the events almost as in the best previous correlations. The default parameters now work to provide the above results. Adjusting the parameters to 15, 0.75, 0.25, 0.30 gives a larger density model peak with a slightly better density correlation for EA of 0.801, but a lower velocity correlation of 0.419.

On 9/22/03 the results using scripts on CR 2003 (the May 29, 2003 CME) show that the parameters 16, 0.75, 0.30, and 0.25 fit best, with 10-day limited E3 correlations of 0.253 and 0.551. The parameters 15, 0.75, 0.30, and 0.25 (as above) give correlations of 0.088 and 0.481. The density peaks associated with the event are reproduced, but the dip between peaks is not as pronounced as it should be and the peaks are not as high as they should be. In this case, setting the parameters to 15, 0.75, 0.25 and 0.30 does not help increase the peak heights much, and does not make the correlations better. Although both ACE and model velocities are high (around 600 km/s), the low velocity correlation during the 10-day time interval comes from a peak observed in the ACE data that is not reproduced in the model.

** Thus the conclusion that I reach is that the parameters 15, 0.75, 0.30 and 0.25 are adequate values to use, and that both Kenichi's technique and Tokumaru's (which was used to determine the 1964.6 correlation using that data set) give approximately the same result. Thus, I have now given Tamsen the task of placing the time-dependent magnetic field analysis into the new version of the program and checking to make sure the correlations work the same on CASS183.

On 10/1/03, with Paul's help, I discovered the problem that did not allow the wso files to be printed out. The problem was an incorrect bbwrite library file. With these files printed out correctly, we placed the new version of the time-dependent tomography on the Web on 10/2/03.

On 10/6/03 we found the files on the Web were not updating correctly, and the problem was being caused by mktimes not working correctly. The xctbeg and xctend times were being set as if they were divided by 48 rather than by 47 in the TIMECOORDINATES.H include file. xctbeg and xctend are now changed so that the beginning and end times in days are given correctly. Mktimes was changed and checked. I am currently checking to see that the version of the program on CASS185 works the same as on cass183.

On 11/7/03 I got the program working with double-precision source times input. The subroutines revised to do this were:

ipsd2020.f
mkpostd.f
readvipsn8.f
writem_scomp8.f
readgips.f (Will need work if Cambridge data are ever read again, and this was NOT done.)

The answers for the ipsd2020 program have changed only very slightly.

On 6/11/04 I changed subroutines mkgmodeltdn.f and mkvmodeltd.f so that real*8 XCtbeg and XCtend are carried through the subroutines and into the Get3dtval subroutine.
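A short, self-contained illustration of the 10/6/03 mktimes / TIMECOORDINATES.H point above: with 48 map times spanning xctbeg to xctend inclusive there are only 47 intervals, so the spacing is the span divided by nT-1 = 47, not by 48. The little program below only illustrates that fence-post arithmetic; nothing else about mktimes.f is implied, and the sample values are arbitrary.

      program timegrid
C Fence-post illustration of the 10/6/03 fix: 48 times spanning
C [xctbeg, xctend] inclusive are separated by (xctend-xctbeg)/47.
      integer nT, i
      real*8 xctbeg, xctend, dt
      nT = 48
      xctbeg = 0.0d0
      xctend = 47.0d0                 ! e.g. days from the run start
      dt = (xctend - xctbeg)/dble(nT - 1)
      do i = 1, nT
        write(*,'(i3,f10.4)') i, xctbeg + dble(i-1)*dt
      end do
      end

Dividing by nT instead would compress the grid so that the last time falls short of xctend, which is the symptom described above.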
On 8/4/04 I found an error in Mkshiftdn.f. Ever since times were placed in terms of days, the search for shifts should have gone to earlier times but instead went to later times at the lower level. This was normally not a problem, since the time normally began at the same value as the lower level and this was within the range of the time step from one level to the other when the steps were large. Slow velocities might have produced an error here. This is now fixed.

On 07/03/08 I began modifying the current version of the ipsdt file on cass185 to incorporate the newest changes. The current ipsdt program (named ipsd2020, re-named ipstd_0) has an update version dated 3/22/07.

On 07/03/08: The statements if(NLV.eq.0) bVcon = .FALSE. and if(NLG.eq.0) bGcon = .FALSE. were added before their respective mktimes. All of the readvipsn calls have been changed to readvipsn8, and the outputs of the data files have been modified to include DOYSGG8 and DOYSsave8, and iReadOotyn8 and iProcessOotyn8, in the main program. This requires the readvipsn8 program to be the one in smeiipstd, and to be changed itself in the readvipsn8 subroutine.

On 07/03/08 I also modified the extractn.f program so that it now allows an input of the names appended to each output file. This is needed to allow the main program to add new objects. I also included a query at the beginning of the program asking which of these bodies you want to have an input parameter file for. The objects so far are:

character cPrefix (NTP)*2 /'ME','VE','EA','MA','JU','SA','UR','NE','PL','UL','H1','H2','SA','SB'/
character cPrefixf(NTP)*2 /'m1','v2','e3','m4','j5','s6','u7','n8','p9','u1','hA','hb','s1','s2'/

STEREO A and B do not have ephemerides available in Fortran yet.

On 07/04/08 I have now revised and streamlined smeiipstd.f (smeiipstd0_NHC - No Helios or Cambridge), and in so doing I have slightly modified the readvipsn8.f subroutine. This needs to be installed into ipstd_0 now that more precision is required of the ipstd program to read in source observation times in double precision.

On 07/04/08 I have changed to two values of DOYSG8 and DOYSV8. The two routines needing changes in the main program when this is done are MkPostd.f (its name remains the same) and writeM_Scomp, which becomes writeM_Scomp8.

On 07/04/08 the main new innovations to the smeiipstd0 program to date (since early 2007) are the installation of an output that is totally filled, the inclusion of gridsphere smoothing over the poles, and the inclusion of the higher-resolution interpolated outputs. These now need to be installed into this program.

On 07/04/08 I placed gridsphere with 0.0 in the last place rather than 90.0. I also installed the mkdmaptdn0 and mkshiftd0 subroutines into the main program.

On 07/04/08 all is installed into the main program and subroutines. There is still a bit more to do, since the program prompts are not yet complete.

On 07/05/08 I discovered that the extractpositionn8.f subroutine was at the end of the extract.f subroutine. When I dropped the extractd.f subroutine from the mk_smeiipstd0 calling sequence, the extractpositionn8.f subroutine could no longer be found. I have now placed the extractpositionn8.f subroutine at the end of the extractdn.f subroutine.

On 07/05/08 I downloaded the readgips.f subroutine from the Web, and I produced a version readgips8.f. With this last, the ipstd_0 main program now compiles in the directory ipstd_0 off SMEIIPSTD on new.cass183.
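Relating to the shift logic discussed in the 9/9/03 and 8/4/04 entries above, here is a minimal sketch of the radial back-mapping picture with placeholder names (this is not the Mkshiftdn.f code, and the units are assumptions): a parcel seen at distance RAU moving radially outward at speed V left the source surface a flight time earlier, and the Sun turned through DLNG degrees of longitude in that time at the 25.38-day inertial rate, so the flight time and the longitude shift are consistent by construction.

      subroutine backmap(RAU, RSS, V, TOBS, TSRC, DLNG)
C Sketch only - placeholder names, not the Mkshiftdn.f variables.
C RAU, RSS: observation and source-surface distances (AU).
C V: radial speed (km/s).  TOBS, TSRC: times in days.
      real RAU, RSS, V, TOBS, TSRC, DLNG, AUKM, TFLT
      parameter (AUKM = 1.496e8)          ! kilometres per AU
      TFLT = (RAU - RSS)*AUKM/V/86400.0   ! flight time in days
      TSRC = TOBS - TFLT      ! departure time from source surface
      DLNG = 360.0*TFLT/25.38 ! longitude swept at 25.38-day rate
      return
      end

Note that TSRC is earlier than TOBS, which is the search direction the 8/4/04 fix restored.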
On 07/05/08 the ipstd_0 program ran until a segmentation fault in mkvmaptdn.f, because I forgot to remove the DMAPV in the calling sequence of the subroutine in the main program. I had a very difficult time trying to get the program to run without a segmentation fault, and I do not know why.

On 07/05/08 I discovered that the mkvmaptdn.f subroutine also had a gridsphere2D in it that needed to have its last parameter set to 0.0 rather than to 90.0. I fixed this, renamed mkvmaptdn.f to mkvmaptdn0.f, and recompiled ipstd_0.

On 07/06/08 the ipstd_0 program still bombed at the end, after all iterations, with a segmentation fault. I discovered that the versions of copydtov and copyvtod did not include a scratch array about three arrays from the end. This is now fixed everywhere in the ipstd_0 program.

On 07/06/08 I discovered an error in calling extractdn.f; FALLOFF and NinterD were reversed in the first of the two calls to extractdn.f.

On 07/06/08 I remembered that the r^-2 needs to be established for the current version of the write3Dinfo in ipstd_0 (as well as in the extractn.f routine), but that this must not be present in the current versions of the high-resolution writes to the disk. Therefore, before and after each high-resolution write I now remove the r^-2 falloff, and then put it back.

On ~07/18/08 there are several changes to the ipstd_0 program. This program is split into two - ipstd_10 and ipstd_20. The former of these programs runs at 10 x 10 deg resolution and defaults to a half-day cadence in order to take the most advantage of the more abundant Ooty data. The latter is the same old routine that runs in 20 x 20 deg and one-day cadence mode for the normal STELab data. At about this same time, I installed the higher-resolution version of write3d_info3DM_HR.f into these programs, and I also placed a write and the analysis copied from the smeiipstd0 program that outputs completely-filled "nv3o" files. This now works and gives better-looking nv3 files from the analyses, and completely filled files from the IPS time-dependent tomography. The high-resolution "nv3h" write now defaults to a file that goes from 0-3 AU, and this file is 81 megabytes for each time. There have been several additional cosmetic changes to the program to make it more user-friendly. These involve the inputs to the program, and when the user chooses Ooty analysis this same choice is also applied to density. In addition, the user now chooses whether to output a number of different files, from Ulysses or STEREO for instance, when these data are available, at the program's end.

On 07/28/08, because the higher-resolution "nv3h" files often show higher-resolution features that are present in the kinematic model, I have modified the extractdn.f program to output STEREO data. This required a new subroutine, stereoorbit.f. This new subroutine contains the current STEREO ephemeris from NASA. The stereoorbits.f program was discovered not to work, and not to agree with the runs made when STEREO locations were extracted using Paul's IDL program. The reason for this was found in the subroutine kepler_orbits called by the stereoorbits.f program. There was a different definition of some of the orbital parameters in this latter subroutine. Paul fixed this, but it was also discovered that the Fortran version of ulyssesorbit.f has always been wrong, so that if it were ever used it would not give the correct location for Ulysses.
This is now fixed in both ulyssesorbits.f and stereoorbits.f and checked to agree both with the IDL routines and, for STEREO, with the NASA ephemeris. The ulyssesorbits.f subroutine is only strictly good for the first Ulysses solar pass, and needs to be revised to include better parameters for subsequent Ulysses passes, including the current pass. The first attempts to provide STEREO data subtractions have been made using the incomplete IPS data from STELab and the ipstd_20 program. This shows that this stereoorbits.f subroutine appears to work. More checks are currently being run.

On 07/29/08 I discovered an error in the ipstd_0 (ipstd_10 and ipstd_20) programs. When the program was run without producing "nv3h" or "nv3o" files, the extractdn.f program at the program end output densities that were far too great (the Falloff had not been removed in the earlier data sets). This is now fixed, and the extractdn.f subroutine now gives proper densities whether or not "nv3o" files are produced.

On about 08/01/08, I found that John had modified the readvips routine so that it incorrectly read Ooty data. I modified the readvips8.f program to read Mano's current data set correctly. This provided really good correlations of velocity with Mano's Ooty data set. This new routine was installed into the program.

On about 08/15/08 I modified the readvips8.f and main program and added two subroutines to the main program called writegoodsourceg.f and writegoodsourcev.f. On request, the program now writes out two files that duplicate the input files in every way except that the lines of sight the ipstd_10n.f program thinks are bad are flagged bad. These output files can then be read by IDL routines that place lines of sight on skymaps.

On about 08/18/08, Mano sent new Ooty data (from 2007) with a different format. I modified the readvips8.f routine to read Mano's new Ooty data set format. The program still outputs the good-source Ooty files with the old format for pickup by the IDL routines. These new Ooty data obtained at solar minimum do not give the same good velocities as does the data set from 2004.

On about 08/23/08 Mario discovered that the program bombed when it was asked to provide Ulysses files. This caused a rewrite of the input to the extract routine to:

character cPrefix (NTP)*2 /'ME','VE','EA','MA','JU','SA','UR','NE','PL','Ul','H1','H2','Sa','Sb'/
character cPrefixf(NTP)*2 /'m1','v2','e3','m4','j5','s6','u7','n8','p9','u1','hA','hb','s1','s2'/

On about 09/08/08 I added parts to the main program and to mkdmaptdn0.f and mkvmaptdn0.f (now called mkdmaptnd0n.f and mkvmaptnd0n.f) to write out, on request, er1_ and er2_ files that show 3D confidence levels that can be imaged similarly to nv3 files. The er1_ files contain the composite line-of-sight crossings that are used to determine (or not) that the region has been deconvolved. This array was always brought out of the two above subroutines, and now it is used as input to the write3d_infotd3Dm.f routines. The write3d_infotd3Dm.f routine needed to be modified so that the normal density-and-velocity-with-distance factor is not multiplied into these arrays as is done for density and velocity. Before use, the array values are modified to reflect the Gaussian temporal and spatial filters used. The er2_ files are the composite weights on the source surface, and these were never before output from the mkdmaptdn0.f and mkvmaptdn0.f routines.
These arrays contain information not only about the weighting functions, but also about the densities and velocities along the line of sight used to weight the source surface. They look not only something like the line-of-sight crossings, but also like the densities themselves. Before use, these array values too are modified to reflect the Gaussian temporal and spatial filters used.

On 01/02/09 an error was discovered in the data that provide the reconstructions. The lines of sight on the Carrington map seem shifted by about 120 degrees from what they should be. We do not know when or where this error crept into the program, but suspect that perhaps the changes to the read routines on 08/01/08, when the readvips8.f routine was modified, may be the cause. The current version being compiled is named readvipsnn8.f and called as readvipsn8.f.

On 01/02/09 I found an error in the mkpostd.f subroutine. This would only give an error in the case of a year crossing. When a year was crossed, the Doy8 used in ECLIPTIC_HELIOGRAPHIC8 would not have been correct. The value of tXC was corrected for a year crossing correctly as long as Adaybeg is correctly set as the total number of days in the preceding year. This is now fixed in the mkpostd.f routine.

On 01/02/09 I found an error in the main ipstd_20n.f program. In this program the value of Idaybeg that goes into the Mkpostd subroutine was never set. The variable Idaybeg was never initialized to the value from the mktimes.f routine, which is either Idaybegg or Idaybegv. The resulting input of "0" to the mkpostd.f subroutine should only have been wrong when a year end was crossed, and thus only wrong when Ooty data that cross one year are used. However, I also note that the way Idaybeg is checked in mkpostd means that a "0" in this location, rather than a 365 or 366, for the first 180 days of the year would have been interpreted as a year-end crossing, causing a condition to be set that was incorrect for the middle of the year. Because of the error (mentioned directly above) in the mkpostd.f subroutine, this would have made an error in the projected lines of sight. I note that in the smeiipstd program Idaybeg is initialized as it goes into the mkpostd.f subroutine.

On ~03/24/09, after Mario Bisi had Paul Hick print out the lines of sight, it was discovered that there was a problem in the newest version of ipstd_20n in producing these lines of sight. This was traced over a week's time to using the velocity-only option with EISCAT data, and an IdaybegG rather than IdaybegV in the mkpostd routine.

On ~5/11/09 I completed preliminary work, begun about 4/10/09, on the version of the program that includes ACE level 0 data in the EISCAT analysis. This included modifications to the following routines:

ipstd_20n.f --> ipstd_20n_in.f
writelosmapvg_in.f
mklosweightsm.f
aips_wtf.f
readace8.f
mkpostd_in.f
mkvmodeltd_in.f
mkvmaptdn0n_in.f
fixmodeltdn.f

The current program is locked to use only acesw_2007.hravg data available in the same directory as the program is run. This happens in readace8.f.

On 10/05/09 I tried the program on new.cass183.ucsd with Nagoya data in 2007. It seems to work on rotation 2061 so far. These runs have formed the basis for the paper in Solar Physics in press in 2010 using CR 2061 data.

On 4/15/10, I began to modify this program to run using ACE level 0 density data as well as velocity data.
This change requires modifications to the programs:

ipstd_20n_in.f --> ipstd_20n_inp.f
mkgmodeltd_in.f
mkdmaptdn0n_in.f

On 4/16/10 I discovered that XCtbeg and XCtend were not defined real*8 in mkvmodeltd_in.f. This is now corrected in mkvmodeltd_in.f. However, the same holds true for mkvmodeltd.f and thus may need to be fixed there.

On 4/20/10 I was able to get ipstd_20n_inp.f running and to use the in-situ ACE data to converge to the time-dependent solution. To do this I used the default rotation 2061 (as in the SP article submitted in 2009), run using STELab data. This required modifications to the two subroutines listed above, but mostly the differences are in the ipstd_20n_inp.f main program.

On 4/20/10 I noticed that the ipstd_20n_in.f program is incorrect. The loop beginning:

do KKK=1,NLmax+nTG      ! Loop before deconvolution to set up VMAP, DMAP

ends with NLG and NLV being decreased in size. I have now named new variables that no longer decrease NLG and NLV in size. These are NLVBEG, NLGBEG, NLVEND, and NLGEND. To show which lines of sight not to use, I now flag these bad in NBSG and NBSV. This error must not have affected ipstd_20n_in.f, but it potentially could have given problems.

On 4/20/10 other changes to the ipstd_20n program include: queries as to whether or not ACE densities are considered; queries as to whether or not the densities are used to modify the base (1 AU) density; changes to determine a value of GOBS in terms of g-level from the density in the main program before mkdmaptdn0n_in.f; and a change to use the mean value of the density to change the value of the density at 1 AU, and to point out this change if it is made.

On ~4/25/11 the readace8.f subroutine, revised as readace8m.f, was included in this program. This new version of the subroutine is now supposed to be able to read two consecutive years of data if required, and to allow the program to set the year of the data file. This same routine version was provided to read ACE level 2 data (swace-20xx_hravg) as readace2_8.f, Wind data (20xx_WIND_hourly_averages) as readwind8.f, and CELIAS data (celias_20xx.hravg) as readcelias8.f. The main program was revised to accommodate these new subroutines and to test them, with the default being the ACE level 0 data read. The subroutines as yet still must have the data in the same subdirectory where the program is run.

On 4/25/11 I now use a Makefile in this subdirectory on Bender that allows either an ipstd_20n_inp or an ipstd_10n_inp_rec main program to be compiled. This new Makefile seems to provide intel-compiled main programs OK.

On 4/25/11 the modifications made included those to the main program to write out a more concise statement of the values that are produced in mktimes.f.

On 4/25/11 I also added the calls in the main program for the new version of the mkshiftn0n.f subroutine that will now work in case a higher-resolution version of the main program is used (such as ipstd_10n.f). This subroutine is now called with the "speed" parameter input in the calling sequence.

On 4/26/11 I put the smeiipstd0n version of write3d_info3dM_HR.f into the main program. Its new arguments have to do with the limits of the error matrix on the high-resolution files. The ipstd10n program now includes this newer version of the write3d_info3dM_HR.f subroutine. To fix this problem required that I modify the main program to include these new arguments and to initialize some of them with questions asked in the main program.
The program now works and writes high-resolution files in the main program ipstd_20n_inp.

On 4/26/11 I now ask the person running the program to what height (rather than to what level number) he wishes to output the high-resolution files. These default to from level 1 to level 31 for nMaps, as before.

On 4/13/11 I noted that on 3/4/11 I looked carefully at the 3D reconstructed ecliptic and density cuts for the two above resolutions. There are significant differences. The same structures are present near Earth generally. Both 3D reconstructions show the structures moving outward from the Sun with the same speed. The data in the higher-resolution files seem to make more sense in some ways, since there are more density loops distinguished in the higher-resolution files, and less structure corotating in the hemisphere opposite the Earth. I guess this will need further study to see if the in-situ at Earth is better served by the higher-resolution reconstructions.

On 4/26/11 I provided a way to limit the tomography if too few lines of sight are present in a rotation. This limit is set to 150 lines of sight. The program now asks you for this limit, and then, if this limit is not reached for either velocity or density, the program does not attempt to reconstruct the data using that option. If there are too few lines of sight for both, then the program stops and tells you that you need to lower the limit (or quit). I haven't tested this programming yet.

On 4/26/11 I changed the high-resolution file write so that the files written in forecast mode stop after the midpoint plus 0.2 of a CR is reached following the forecast time. This is the same as in the smeiipstd program now.

On 4/26/11 I incorporated all the changed extract routines in the main program. These included:

extractdn_rec.f
extractd3d.f
extractdn.f
extractdn_intel.f

The error was found because the extract routine seemed not to work well with CR 1964.6 when the extract routine was used just following the first real extract analysis, extractd3d.f, in the main program. That subroutine worked OK and always did. However, when the longitude of the observer changes by more than half a CR, the non-fixed subroutine would not correctly make the extraction if this jump occurred at the very beginning of the extraction (for some specific times). Rotation CR 1964.6 was one such rotation, and the beginning of the extraction was one such time. The effect was to provide an observer location that was incorrect by a whole Carrington rotation for the extractdn.f routines. This problem will need fixing in all the extract routines, by replacing the statement

C     if(I.gt.-NTT) then          ! Other times

with

      if(ObsXCL.gt.-99.) then     ! Other times

and initializing ObsXCL = -100.0 earlier in the program.

On 4/27/11 I found and fixed a major bug in the mkgmodeltd_in.f subroutine. The values of GM2(NL+N_IN) and GMWTi(NL+N_IN) were only dimensioned NL. The arrays here would surely have been overwritten.

On 4/27/11 I also discovered and fixed a second major bug in the mkgmodeltd_in.f subroutine. Every so often the model density went negative in the latter part of the program where the model was to produce a model value. This caused the value of GM2(I) to become NaN, and this is now fixed by limiting the densities obtained in the model to positive (but very small) values in the latter part of the program, as in the main program.

On 4/27/11 I also found a major error in the main program.
Just prior to the call to the fixmodeltdn.f subroutine, and following the call to the mkgmodeltd_in.f subroutine, the observed density values GOBS(I) were sometimes a bad value without NBSG(I) being set to zero. This occurred only in the analysis beyond NLG observations. This is now fixed by placing a procedure in the main program that sets the value of NBSG(I) = 0 for these values.

On 4/27/11 I also found and fixed the reason that the SIG parameter sometimes became NaN in the fixmodeltdn.f subroutine. Every so often the value of density became so low that in the fixmodeltdn.f subroutine FIXR(I) became too low for SIG to be calculated. The limits

      if (R.gt.10.0) R = 10.0     ! Fix in 4/27/2011 BVJ
      if (R.lt.0.1)  R = 0.1

are now in place in the proper location to solve this long-time problem.

On 4/27/11 this program now runs through and gives answers. There still seems to be a "bug" in the program, however. The extract routine that produces the e3 tomography density file gives density that goes from 0.0 to 0.015. The density looks to peak in tomography IDL on July 15 at 0 UT. The peak in the file is at DOY 197 (July 15) at 06 UT at 0.016. The value on DOY 197 0 UT is 0.014. This density peak for the Bastille-day CME is timed about as it is supposed to be. The velocity magnitude for this same e3 file seems OK, but the minimum of the tomography speed is located on 07/11, about 4 days earlier than it is supposed to be, at about 300 km/s. In the e3 file this minimum is placed at 316 km/s at DOY 193 (July 11) 12 UT. The ACE L0 data show this minimum at about July 15 at 0 UT. The subroutine that produces this is the extractdn.f (2nd Extractdn) subroutine. The E3 ACE L0 agreement for density has a tomographic peak at 07/14.5 of about 90 particles, and the E3 file confirms this peak on DOY 196 18 UT and 197 0 UT at 87 and 81 particles respectively. Other peaks in this file seem about right. The E3 ACE L0 tomography velocity has a minimum at 07/11 like the e3 file, and this is confirmed in the E3 file at 296 km/s on DOY 193 (July 11) at 6 UT. This file is produced by the Extractd3D.f (Before Extractd3D) subroutine. The Ea ACE L0 density tomography peaks at 07/15 6 UT at 119 particles. The IDL plot seems to peak at 0 UT, at exactly the same time as ACE L0, but the L0 peak goes to only about 47 particles. The Ea velocity minimum is again at 316 km/s on DOY 193 at 12 UT, 4 days earlier than at ACE. The extract routine that produces this is the extractdn.f (first extract) subroutine.

On 4/27/11 I think I found the reason the e3 file was too low - one falloff too many before the last extract routine.

On 4/28/11 I found another error in that just before the FixModeltdn subroutine, for both the velocities and the densities, a limit placed on the model was activated incorrectly. The line read:

      if(WTSV_in(I-N_INV).eq.0.and.I.gt.NLV) then

and should have read:

      if(WTSV_in(I-NLV).eq.0.and.I.gt.NLV) then

This made the program bomb sometimes, but the limit surely should not have been there.
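A minimal sketch of the 4/27/11 NBSG guard described at the start of this entry, with an assumed argument list, loop range and bad-value sentinel (none of these is taken from the actual main program): any appended in-situ density value that is still bad after mkgmodeltd_in.f gets its use flag zeroed so that fixmodeltdn.f never sees it.

      subroutine flagbadgobs(NLG, N_ING, GOBS, NBSG, BadR4)
C Sketch only - the argument list and the BadR4 sentinel are assumed.
C Zero the use flag of any appended in-situ density that is still bad
C so that fixmodeltdn.f never operates on it.
      integer NLG, N_ING, NBSG(NLG+N_ING), I
      real GOBS(NLG+N_ING), BadR4
      do I = NLG+1, NLG+N_ING
        if (GOBS(I) .eq. BadR4) NBSG(I) = 0
      end do
      return
      end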
On 4/28/11 I found that when the extract routine is used, it limits the final time cutoff to the time XCtendG. Since this is the current time, no forecasts are possible using the extract routines. I have now fixed this by adding a small 4-day value to the current time, but only when the forecast mode is invoked. This seems to give proper values beyond the current time. On 4/30/11 I changed the main program to default the PWG and PWGG density proxy to 0.20 (from 0.35); this works better with both the current IPS proxy and the ACE level-zero density conversion to give a better fit (more of a one-to-one correlation) to the IPS-derived densities. I suspect this value is somewhat unique to the lower-than-average ACE level 0 density excursions and to the day averages that are used in the time series to compare the IPS tomography with the averaged ACE level 0 data. On 4/30/11 and 5/1/11 I was able to modify the main program to decrease the g-level to fit the ACE level zero data better. This results in a lower density level than the nominal one and is probably special to the ACE level 0 data, where the mean value and lowest densities are lower than in any of the other space experiments, including the density data from the ACE Level 2 analysis. There is now an option in the main program to provide this innovation. The option works by determining the mean value of the IPS level from the density conversion of the in-situ data following the change of the mean density. The mean IPS level for the interval is supposed to be 1.00. If it is not, the new mean is used as the IPS g-level mean, and the IPS g-values less than the original value of 1.00 are altered to provide this new mean g-level. This works pretty well and is iterative, in that it allows the current tomography itself to provide a best fit to the mean g-level. A better way would be to decrease the total in-situ density over the interval to provide a more accurate tomography fit to the arbitrary density, even if this lower density value is unrealistic. As yet no process has been built to iteratively give a best fit using this idea. On 5/2/11 and slightly later, I changed a few items in the extract routine to hopefully forecast densities (and velocities) farther in advance in a better way. The main changes are near mktimes, so that the times produced extrapolate into the future as much as possible in extractdn.f in forecast mode. This was done by changing NTV and NTG before mktimes.f when the system is in forecast mode. This is being tested in the IPS forecasts from Toyokawa. Perhaps we should not extrapolate this much, but time will tell. On 5/18/11 there was an error discovered in the main program. The primary error was caused by the error limit in the main program asking "Do you want the density error to limit HR densities?$no',bDdener". bDdener was set correctly, but bVdener was set to .TRUE. and never accessed. When the error files were not used, bWrerr was set to .FALSE., and this caused the bVdener in the high-resolution writes to severely limit the volumetric data. This is now fixed in the main program, and good limit files are still produced even if bWrerr is set to .FALSE. I also modified the high-resolution write program to initialize the values of DENER and VENER in the write3d_infotd3dM_HR.f subroutine.
On 5/18/11 I modified the main program so that in forecast mode the values of NTV and NTG are set as:
if(bForecast) then ! For forecast mode
NTV = nint((float(NTV) + 3.0)/2.0) + 7
NTG = nint((float(NTG) + 3.0)/2.0) + 7
end if
The old version of the program severely limited these values (to two days earlier) in the high-resolution write, to a little over a day from the current run time. These values were 29 and are now set to 31, and so the forecast now goes a little over 3 days into the future. On 6/22/11 I fixed an error in mktimes that produced a value of nT in forecast mode. When the input value of nT was smaller than the default, the value set was the one that had been calculated outside the subroutine. When this was an odd number, the program passed this number on to the outside. This number needs to be even to give values timed according to the exact temporal intervals in the extract routines. In any case, this is now fixed in the forecast, and the same answer is available in the non-forecast version of the analysis. The value above of 31 did not work in the current scheme. On 7/31/2011, and a week earlier, I fixed the magnetic field output for the time-dependent routine. First, I added a new call to the main program, write3D_infotd3DMM() (the program run, for instance, as: ipstd_20n_in-situ/test_mag$ ./ipstd_20n_inp_mag_mod_intel nagoya=nagoya,,yearly, nso_noa=nso_ktpk[4]_[3].fts,$DAT,map,nso_ktpk,hcss). This write-output version of the program actually produces no file output, but provides the velocity array needed to extrapolate the potential magnetic field outward. In this way, if the magnetic field is to be extrapolated, this write subroutine does not need to be run to produce output earlier in the program. The actual error in the program was a two-part problem. Paul produced the original programming. The program first died with a segmentation fault when magnetic field was asked for. The location for this was found in spiney.f. It was caused by an input array in the main program not being set large enough. This scratch array is called BR2DT, and should be increased to
& BR2DT(nLng,nLat,4*nTmaxG), !scratch
in the main program from BR2DT(nLng,nLat,nTmaxG). When this was done, the subroutine did not bomb, but the output was zero. The reason for this result was eventually found to be an error in the Write3D_bbt.f subroutine. The error was in the call:
Pss(3) = ArrR4Interpolate(nTim, TT3D(kT3D), TTi+XC3D( loc3D(iLng,iLat,iRad,iTim,3) ))
The original array TT3D was begun at TT3D(iT3D), which began the interpolate routine in time following the location of the needed maps. This had no effect when the array began at that time, but in the current Write3D_bbt.f subroutine a search is initiated at the beginning to determine the earliest good time, and the numbers of needed arrays and files are used from the location kT3D rather than iT3D. I should add that the problem was fixed in Paul's on-line version of the Write3D_bbt.f subroutine, but not in the versions that I found on my bender computer. On 9/27/11 John found an error in writing out the SRCGV files in the subroutine writegoodsourcev.f. The write must include NLV (not NLG) as in the current programming. This error is now fixed in this subroutine. There is still another problem in the writegoodsourcev.f program in that the line
if(NBSV(I).eq.0) write(SRCGVN(50:53),'(I4)') IBADVOBSN ! Bad Nagoya velocities
should read
if(NBSV(I).eq.0) write(SRCGVN(50:54),'(I5)') IBADVOBSN ! Bad Nagoya velocities
to write out the correct bad-velocity values into the nvgoya.xxxx file at this location.
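The field-width change matters because a Fortran internal write that does not fit its format field fills the field with asterisks rather than the number. A small stand-alone illustration (the record length and the bad-value code below are made up for the example, not taken from writegoodsourcev.f):

      program fmtwidth
c Writing a 5-digit flag into a 4-character slice yields '****';
c widening the slice and the format to I5 preserves the value.
      character SRCGVN*60
      integer IBADVOBSN
      IBADVOBSN = 99999
      SRCGVN = ' '
      write(SRCGVN(50:53),'(I4)') IBADVOBSN
      print *, 'I4 into (50:53): [', SRCGVN(50:53), ']'
      write(SRCGVN(50:54),'(I5)') IBADVOBSN
      print *, 'I5 into (50:54): [', SRCGVN(50:54), ']'
      end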
On 9/27/11 John also discovered a similar error in the readace8m.f subroutine. The current line iVel = nint(VEL) should read iVel = int(VEL). Under this line,
if(DEN.eq.-9999.9) DEN = -9.99999
needs to be placed to provide a bad density value that fits into the g-value space provided and output in subroutines in the main program. These subroutine fixes need to be carried through to the read routines other than ACE level 0, and those routines need to be checked to make sure they work with bad in-situ data. On 9/28/11 I made changes to the printout of the last source values so that they now state whether they are in-situ or IPS, etc. On 9/28/11 I added a subroutine to the main program called writegoodsourcegv.f. If NLV = NLG and N_ING = N_INV, then this subroutine outputs a single combined file of data called negoya.xxxx. On 9/28/11 and 9/29/11 I found a bug in the tomography program that truncated the analysis before the last weights were provided for the sources at the end of the observing interval. This has the effect of not allowing the last IPS sources to be counted in the forecast analysis. The statement
if(bVcon.and.XCEV(K).gt.XCintV(NTVV+2).and.XEV(K).le.XEhighV) then
should read
if(bVcon.and.(XCEV(K).gt.XCintV(NTVV+2).and.XEV(K).le.XEhighV).or.(K.ge.NLV)) then
This now allows the last remote-sensing sources to be counted. I further put a condition on the following write statement so that nothing is printed if no sources are found in the interval. On 4/1/12 I fixed a couple of year-end problems that have come up (because of the year end). These corrections were put in place in the mag version of the program today. There was an "or" in the main program about the change of year and day that was set to an "and". This does not help. There was an L=NLV+1,I_INV that was absolutely incorrect, and John found this. The main problem causing the program to bomb was the fact that the VMAP array was not initialized. The SMEI tomography handles this by always providing a velocity array that starts with a constant 400 km/s velocity. In this program the problem presented itself because there were absolutely no remotely-sensed velocity sources in the rotation. There is now a trigger: if the array is always left at zero (because there are always bad values in all the VMAPs), a provision is made to provide initial constant velocities to start. This same fix was applied to the g-level data, if ever there are velocities and no g-levels. I also fixed the output statement about the numbers of good velocity values and good in-situ velocity values so that they are output separately; the few fixes provided now allow tomography runs with only in-situ data to proceed and to give the numbers of in-situ values. On 4/1/12 I also fixed the ACE read subroutines. These subroutines were never able to cross from one year to the next. They now do. They do this by checking iostat, and when it is not 0 they loop back, open the next year's file, and add the data from this file up to the cut-off time specified. On 5/30/12 I worked to place all in-situ reads into the current program so that they could be input with a wildcard placed on the command line. This required placing a cwildace, etc. parameter into all of the in-situ data read routines, and declaring these wildcards character*80 in the read routines and the main program. The defaults are now the zshare directory as specified in the main program. On the command line one types one of the following, or here at UCSD simply does nothing and leaves the command line blank.
ace=$DAT/insitu/acesw_????.hravg
ace2=$DAT/insitu/swace_????.hravg
wind=$DAT/insitu/wind_swe/????_WIND_hourly_averages
celias=$DAT/insitu/celias_????.hravg
In addition, on 5/30/12 I worked to provide the second request - that the density default to the in-situ data set chosen for the velocity parameter - in ipstd_20n_inp.f. These all seem to work when tested, at least through the second iteration. On 6/5/2012 I attempted to find out why the magnetic field for CR2059 has a spike in the data in the middle of the rotation. To do this I modified the writebbt.f program (writebbtm.f) to print out intermediate mag field values. When I did this I found the spike in the source surface maps on CR 2059. This implies that the spike comes from the original source surface maps that are spline fit. I noticed that the magnetic field data for the rotation go through to CR2059.0861 and then pick up again at CR2059.7846. This is a big gap, and it may be that the map on one or the other end of the gap has bad values. On 6/10/2012 I fixed the cut-off beyond the input date to eliminate in-situ data points greater than that date in forecast mode in all the current ipstd programs: ipstd_20n_inp, ipstd_20n_inp_mag, and ipstd_20n_inp_mag_mod. This did not work in forecast mode before this time. On 8/23/2012 I found an error in the input format for the Wind data. The read skipped 68 rather than 58 values, and this let the density error rather than the density be input to the tomography program. With this fixed, I expect a far better result when reading the Wind in-situ data as input. On 8/24/2012 I found that the ipstd programs all used bGLACE for fitting the mean value of the G-level, but that this same trigger bGACE was used as a trigger to use the in-situ densities in the main program near fixmodeltdn. I have now changed this so that bFACE is the trigger in this location, as appropriate. On 8/25/2012 I found and fixed an error with the CELIAS density reads. There was an extra G on the end of XCEGG in the input read routine in the main program. The Wind density read is still not fixed, even following a complete replacement of the input calling sequence. On 8/25/2012 I changed all the velocity read routines so that they now input floating-point velocities into the observed velocity arrays. Prior to this, all velocity values were rounded to the nearest integer in the input read arrays. On 8/26/2012 I found the error in the Wind routine. I was reading the wrong parameter from the Wind data file. The correct density parameter requires a skip of 20 spaces following the velocity read. When this is done, the Wind correlations generally come up to better than 0.9. I am sorry these reads took so long to fix. On 10/12/2012 I modified the main programs of all the ips routines to allow an output of Messenger data on or after 2007. The original programs limited the outputs to on or after 2005. On 11/07/2012 I fixed a small error in the main program that would have hindered in-situ reads of ACE level-2 data using a wildcard. On 11/08/2012 I was able to get the in-situ reads to allow a wildcard input such as: ace=$DAT/insitu/acesw_[4].hravg This is similar to the way John has gotten the program working in the past at the CCMC. All four data reads were checked. On 11/08/2012 I was able to get the in-situ reads to go through the year end. All four data read inputs were checked.
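The year-end behaviour described in these read fixes (the 4/1/12 and 11/08/2012 entries) amounts to: read one yearly file until iostat signals end of file, then open the following year's file and keep reading until the cut-off time. A minimal sketch of that loop; the file-name pattern and record layout are illustrative only, not the actual readace8m.f code:

      program yearroll
c Read hourly averages year file by year file until a cut-off time.
c File name pattern and record layout here are illustrative only.
      character cfile*100
      integer iYr, ios
      real*8 tRec, tCut
      tCut = 2456000.0d0
      iYr  = 2011
   10 continue
      write(cfile,'(a,i4.4,a)') 'acesw_', iYr, '.hravg'
      open(11, file=cfile, status='old', iostat=ios)
      if (ios .ne. 0) goto 99
   20 continue
      read(11, *, iostat=ios) tRec
c Non-zero iostat means end of this year's file: go to the next year.
      if (ios .ne. 0) then
         close(11)
         iYr = iYr + 1
         goto 10
      end if
      if (tRec .gt. tCut) goto 99
c ... accumulate the record here ...
      goto 20
   99 continue
      end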
On 11/15/2012 I began modifying the ipstd_20n_inp_mag.f program to read MEXART data and to include source size in the input. I modified both aips_wtf.f and mklosweightsm.f and the main program to include a source size in the input. The resulting subroutines are called aips_wtfm.f and mklosweightsmm.f. The call to mklosweightsmm is now included in the main program as:
call MkLOSWeightsmm(NLOSWV,dLOSV,NLOSV,NTVD,SSV(NTVB),XEV(NTVB),DISTV(NTVB),WTSV(1,NTVB),PWRV)
Here SSV(NTVB) is the new input source size value. In this first attempt, I initialize all the source sizes to zero at the beginning of the main program. The one MEXART velocity source size value each year is input specifically in the main program after the Nagoya read by:
do I=1,NLV ! MEXART quick fix
if(SRCV(I).eq.'3C48M ') then
print *, 'MEXART SOURCE FOUND ', SRCV(I), ' I =',I
SSV(I) = 0.25
end if
end do
The above implies that the source read is listed as 3C48M, where the "M" is the unique identifier. The source size array is passed to mklosweightsmm.f, which then passes it to aips_wtfm.f, where it is used in the analysis. On 11/16/2012 I also modified the program readvips8.f and called it readvips8m.f. This allows a read of the Nagoya data using either a version that reads the data with the old IPS inputs or a version that reads the IPS inputs from Nagoya with RA and DEC values. This is used to read the MEXART data for 3C48M using RA and DEC as the source location. Currently, the source needs to be placed in the proper time location within the Nagoya file. On 11/16-18/2012 the program ipstd_20n_inp_mag_mex.f compiles with the Makefile and runs, including the two MEXART 3C48M radio sources to date (in 2009 - CR 2082, and 2012 - CR 2123), and inserts these into the tomographic program properly. The two sources are accepted into the tomography and have been used to provide IDL image outputs with the MEXART data included. On 12/19/2012 I was told by Fred Ipavich that CELIAS data are now available in real time as hour averages. Shortly thereafter I modified the ipstd_20n_inp_mag.f program to read these data by including two new subroutines, readcelias8R_d.f and readcelias8R_v.f. On 12/31/2012 I discovered that the density read programs all limited velocity on the reads rather than density. This is now fixed in the ipstd_20n_inp_mag.f program and in all density reads. Doing so required that the values of DMAX and DMIN be used (unlike previously) and that dlimitu be changed to DMAX and dlimitl be changed to DMIN in the main program. On 12/31/2012 I was able to get the real-time CELIAS analysis working by changing the readcelias8R_d.f and readcelias8R_v.f subroutines to remove a "*" from the end of the file. Before this there was a ???? appended. This now works with a $DAT wildcard on the command-line call celias=$DAT/insitu/realcelias/celias_realtime.hravg* This is not too elegant, but at least it also now works in sync_ips_daily when the program is run on the web. The current real-time celias does not allow anything other than a space following hravg* in the calling sequence. On 12/31/2012 I modified the main programs ipstd_20n_inp.f and ipstd_20n_inp_mag_mod.f so that they can use the CELIAS data (and all the modified in-situ read programs) correctly. On 12/31/2012 I modified the main programs ipstd_10n_inp.f and ipstd_10n_inp_mag_mod.f so that they can use the CELIAS data (and all the modified in-situ read programs) correctly. The ipstd_10n_inp_mag_mod.f program had a considerable number of overwrites from the editor, ugh.
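For the 12/31/2012 density-read change above, the point is simply that the read routines should range-check the density against density limits (DMAX, DMIN) rather than the velocity limits that had been applied. A minimal sketch of the intended check; everything other than the DMAX/DMIN names is made up for the example:

      program denlimit
c Flag density records outside the allowed density range.
      real DMAX, DMIN, DEN(4)
      integer I
      data DEN /3.5, -9.99999, 250.0, 12.0/
      DMIN = 0.01
      DMAX = 150.0
      do I = 1, 4
         if (DEN(I) .lt. DMIN .or. DEN(I) .gt. DMAX) then
            print *, 'bad density record', I, DEN(I)
         end if
      end do
      end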
On 12/31/2013 I modified the main program ipstd_10n_inp_mag_mex.f so that it can use the CELIAS data (and all the modified in-situ read programs) correctly. On 08/25/2013 I began to modify the program ipstd_20n_inp_mag.f to run using a general input - from, for instance, KSWC, Ooty, or Nagoya data. I have termed this program ipstd_20n_inp_mag_g.f until I am able to get it to run with both Nagoya and KSWC data. On 08/29/2013 I found an error in the Ooty read routine in the conversion of RA and DEC to rLng and rLat for negative values of DEC. This error seemed present from the beginning and gives an rLat that can be incorrect by as much as a degree. This is now fixed. On 08/30/2013 I was able to get the program ipstd_20n_inp_mag_g.f to run with the newly-modified readvipsn8.f subroutine. The subroutine now has an ireadGen and an iprocessGen entry that allow both KSWC.XXXX and nagoya.XXXX data to be read using inputs of the RA and Dec in J1950 coordinates. The program works and converges using both data sets. On 08/30/2013 I decided to include a second readvipsn8.f subroutine in the main program, named readvipsn8g.f. This subroutine now hosts the read of different programs using the ireadGen and iprocessGen entry calls. This works, but in doing it I found that the readvipsn8.f subroutine was really from the readvips8.f subroutine, called that externally but called readvipsn8.f inside. I modified the readvipsn8.f subroutine when I did so to provide the gen outputs. I hope this does not cause trouble. On 08/30/2013 I produced a new subroutine, precession_drive.f, that places all of the precession in the gen part of readvipsn8g.f into a single subroutine. The precession_drive.f subroutine is more general than the version that was written in the readvipsn8g.f subroutine. The precession_drive.f subroutine allows RA and DEC precession from J1950 to present, or from J2000 to present, or J1950 to J2000, or finally from any date to any date, just by inputting a mode switch. The subroutine was tested to work the same as the version in the readvipsn8g.f subroutine. An added output available is the precessed RA and DEC in character format. On 09/05/2013 I changed the value of nMaxT from 250 to 600 in the write3d_bbt.f and write3d_bbtm.f subroutines. On 09/05/2013 I corrected the error in the Ooty read program in the readvips8.f subroutine (that is called as readvipsn8.f) for negative source declinations, and I recompiled all of the ipstd_10n subroutines. The recompiled ipstd_10n_inp_mag.f program no longer gives an error when the Ooty data are read without using in-situ parameters. The program also no longer bombs when it is asked to write out magnetic fields at the Ooty resolution. Thus, changing nMaxT from 250 to 600 in the write3d_bbt.f subroutine fixed the problem of obtaining magnetic field at the Ooty 10 x 10 degree resolution. The magnetic field writes at nv3h* resolutions are still in question. **** On 01/09/2014 there is a new standard format now available for the IPS analysis. Although preliminary, it is being implemented currently by STELab, and will hopefully be adopted by all other groups. I have begun to change the readvipsn8g.f routine to accommodate this new input. Whereas the readvipsn8g.f subroutine has been used to read STELab data and is fit to read KSWC data, the current subroutine has been renamed readvipsn8g1.f to distinguish it, so that it reads the current KSWC data and the new Nagoya data format. The main program that uses this read, ipstd_20n_inp_mag.f, was changed to now be called ipstd_20n_inp_mag_g.f.
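For reference against the precession_drive.f description in the 08/30/2013 entry above: the standard single-rotation reduction for precessing mean equatorial coordinates from J2000.0 to another epoch uses the IAU 1976 accumulated angles (Lieske/Meeus form). The sketch below is only that textbook step, not the author's subroutine, and it covers just the J2000-to-date case; a mode switch like the one described would wrap steps of this kind (or their inverses) for the other epoch pairs.

      subroutine prec2000(raj2000, dcj2000, tcen, ra, dc)
c Precess mean (RA,Dec), in radians, from J2000.0 to J2000.0 + tcen
c Julian centuries, using the IAU 1976 accumulated precession angles.
      implicit none
      real*8 raj2000, dcj2000, tcen, ra, dc
      real*8 zeta, z, theta, aa, bb, cc, s2r
c Arcseconds to radians.
      s2r = 4.84813681109536d-6
      zeta  = (2306.2181d0*tcen + 0.30188d0*tcen**2
     &        + 0.017998d0*tcen**3) * s2r
      z     = (2306.2181d0*tcen + 1.09468d0*tcen**2
     &        + 0.018203d0*tcen**3) * s2r
      theta = (2004.3109d0*tcen - 0.42665d0*tcen**2
     &        - 0.041833d0*tcen**3) * s2r
      aa = cos(dcj2000)*sin(raj2000 + zeta)
      bb = cos(theta)*cos(dcj2000)*cos(raj2000 + zeta)
     &     - sin(theta)*sin(dcj2000)
      cc = sin(theta)*cos(dcj2000)*cos(raj2000 + zeta)
     &     + cos(theta)*sin(dcj2000)
      ra = atan2(aa, bb) + z
      dc = asin(cc)
      return
      end

Here tcen would be (target epoch - 2000.0)/100.0 in Julian centuries; precession from J1950 or from an arbitrary date would be handled by composing such a step with its inverse, which is presumably what the mode switch in precession_drive.f selects.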
On 01/20/2014 I found that the program ipstd_20n_inp_mag_g.f that calls extractdnn.f and extracttdnnn.f does not go through the year end correctly, but does work when the new year begins in 2014. This happens on January 26th, but probably not before. This needs to be fixed at least before the next year. The extract subroutines do not work in forecast mode before the 26th of January. The write_3D_infotd3D subroutines may not work correctly either. On 01/22/2014 I was able to fix the problem of the extractdnn.f program not going through the end of the year. I did this by re-writing a section of the extract code so that the check of the start and end dates works better. The fix that I produced needed the beginning year of the tomography run input to the subroutine. I have renamed this subroutine extractdvd.f to avoid confusion. The program still needs to be checked to be sure that other aspects of the extract program work. Also, the extracted time series now covers too much time in the forecast program and does not interpolate far enough into the future for the ea time series. However, next I will probably find that the extractdnnn.f routine can be similarly fixed to work correctly, and this will forecast farther into the future. On 01/22/2014, following the above fix, I noted that the year crossing still gives a one-day shift to the time series. I found the way to fix this in the mkpostd_in routine, where a day was subtracted from the second year's time. It was a matter of whether the second day begins at 0 and counts 1 at the end of the day, or whether the count begins at 1 with the times in hours added to this. In the call DATE_DOY(0,nYr,cMon,iM,iD,iDoy), the conversion provides a count that begins at 0 and ends at 1 at the end of the day. The difficulty has existed ever since the IPS tomography analysis was re-programmed many years ago. I expect that the SMEI tomography is incorrect also, but I need to check. On 01/23/2014 I fixed the extractdnnn.f routine by the same procedure that was used to fix extractdnn.f. The extractdnnn.f subroutine is now renamed extractdvdm.f, and has the year of the beginning of the tomography input in the calling sequence. Both of the two extract routines have been tested to assure that they give the same forecast output time series lengths as before. Also, the standard time series tested seems to give close to exactly the same result over the same time period when run in archival mode. ** ipstd_20n_inp_mag_v14.f ** On 01/27/2014 I began a process of renaming the ipstd programs with a version number. Thus, the latest ipstd_20n_inp_mag_g.f has become ipstd_20n_inp_mag_v14.f. Subsequent small changes are intended to become ipstd_20n_inp_mag_v14a.f, in 2014 at least. On 01/27/2014 I began the process of placing errors into the general read routine, and into the subroutines that weight the tomography lines of sight. This was not completed, as I also began to attempt to make the main program write out base (and nv3h*) files. On 01/28/2014 I began the process of getting the main program to write out base (and nv3h*) files. To do this I needed to modify the main program inputs to include a base routine that ran over several heights with appropriate input statements, similar to the nv3h* write program.
To do this I included a call to the write3d_infotd3dM_HR program at the end of the main program rather than use the current program that provides the nv3o* writes. This seems to have provided completely-filled base files. On 01/28/2014 I modified the write3d_infotd3dM_HR.f subroutine to begin writing out files in forecast mode that begin 20.0 days before the run time. This was also done for the write3d_infotd3dMM.f subroutine. On 01/28/2014 I completed modifications to this beginning write, and also to the ending write of the nv3h* and base files (called nv3b* files), so that in forecast mode the nv3h* and nv3b* files begin 20.0 days before the run time and end 4.0 days after it. Yesterday I had found a way to produce nson* (magnetic field) files over the same time interval. I completed this modification so that the nson files are filled as much as possible, even for the very first data files in the timing sequence. On 01/28/2014 I also modified the write3d_bbtm.f subroutine so that it now includes a multiplicative factor that can increase the magnitude of the source surface maps. This is called BFACTOR, and an input for it is allowed if magnetic fields are made in the main program. I tentatively set this factor to 2.0 to better match the in-situ fields, I hope. The insitu_drive_mag IDL program does not seem to be working correctly. On 02/19/2014 I was able to get the headers in write3d_bbtm.f and the write to work regardless of whether a high-density base is used first or not. Several parameters are passed to write3d_bbtm.f in the include file t3d_array.h, and these need to be filled before the write3d_bbtm.f subroutine if they are not filled before this. One of the parameters [scale(2)] was re-dimensioned to scale(4) in the main program, and it is now passed to each write3d_infotd3d subroutine (there are 4 different ones used in the program); thus this parameter needed to be redimensioned in each of these subroutines, and also in the subroutine t3d_get_grid.f, where it is set and passed via the t3d_array.h parameter file. This is a mess, but it works. On 02/19/2014 I was able to get the headers in both the nv3b and nson files to have the same values, or at least values commensurate with the numbers of parameters that are printed out. The one exception is still the start time, which seems not to be able to be set by a call T3D_set (T3D__TT , 0, XCtbegG ) call. This seems very strange. call T3D_set is in the subroutine t3d_get_grid.f, so there is something there that is zeroing TT somehow. Dumb - I forgot XCtbegG is double precision. All is now fixed. On 03/10/2014 I found that the e3 files produced unnormalized densities but normalized radial and tangential fields. This needs to be made more obvious, and so I have now made both magnetic fields and densities unnormalized, that is, their true values. This has now been done in ipstd_20n_inp_mag_v14.f and subsequent versions of the main program. On ~02/20/2014 I began the process of including any spacecraft into the tomography program in order to converge to that spacecraft's in-situ values. This was begun in an ipstd_20n_inp_mag_v14a.f version of the program. I first needed to modify the main program so that it could accept additional wildcards. The first tests are with STEREO A and B. Thus there are four new read programs: readSA8_v.f, readSB8_v.f, readSA8_d.f, and readSB8_d.f. To do this, I decided to provide an additional subroutine to determine the spacecraft parameters.
This is accomplished in the new subroutine get_scparams.f. On ~02/30/2014, once this worked, I hope, I modified the mkpostdn.f routine to be a new, better version: mkpostdn_ins.f. This subroutine now works like the old one, but for spacecraft other than at Earth it needs to deal with a non-Earth location of the instrument in heliographic coordinates. It also now separates each spacecraft according to its position in the sequence of the input read. This is now done. The old version of mkpostdn.f worked, but was not sufficient to provide a location other than at Earth that would also include lines of sight from that position in case the STEREO spacecraft are used as imagers. Now this should work, as it did for Helios. At this same time, I modified the main program in order to read both STEREO spacecraft as wanted. On ~03/02/2014, after running the program, I realized that unless there were lines of sight that crossed past the spacecraft to be iterated, I would not be able to refine that portion of the image using the tomographic program. For STEREO-A there must be lines of sight fairly close to the STEREO-A spacecraft in the test period 2115, because the measurements at STEREO-A get better when checked in situ. However, the STEREO-B in-situ extractions get no better than before. Thus there need to be lines of sight that pass through each spacecraft where in-situ values are used, and a mechanism to provide pseudo observational data on the LOS past the spacecraft on that LOS. To do this I have modified the readvipsn8.f subroutine into readvipsn8_ins.f, so that it now inserts pseudo lines of sight into the observational data set at the times of interest. It currently does this at a settable interval - currently every 0.25 days. The observed values are now set in the main program, but this still does not work to give the required result in situ. On 06/19/2014 I discovered an "error" in write3d_bbtm.f when I attempted to modify the magnetic field strength by BFACTOR. BFACTOR worked to change the output 3D data but not the e3* file. This is now fixed so that both the output nson files and the time series files are changed when BFACTOR is set in ipstd_20n_inp_mag_v14.f. On 07/03/2014 I began the process of writing a subroutine to determine the orbit of the Rosetta comet, 67P/CG. This required modifying the Ulysses orbit subroutine ulyssesorbit.f into comet67PGCorbit.f (using the current JPL ephemerides to locate the comet), extractpositionn8.f (to include element 16), the main program ipstd_20n_inp_mag_v14b.f, and the makefile. The program seems to work to attempt to provide an extraction at the right distance for comet 67P/CG. On 07/04/2014 I increased the outer limit NMAP to 41 (from 31), so that now the tomography allows an extension of the nv3h files and an extraction of the comet parameters to 4.0 AU (because the comet is now at 3.7 AU). This too seems to work, including the extraction of magnetic field components at the position of the comet. On 07/29/2014 I was finally able to get the ipstd_20n_inp_mag_v14_mex.f program operating to use both the STELab and the MEXART data, for both g-level and velocity data, using the "general" IPS input format. On 09/11/2014 I added another version, ipstd_20n_inp_mag_v14c.f, to the makefile. This is the v14 version of the program that I have been working on since Jeju 8/16/2014 that is supposed to allow reads of the IPS Nagoya and Jeju data together, and go to 4 AU; too many mods, and it now does not work to give outputs.
Thus there are four versions of the time-dependent tomography program:
ipstd_20n_inp_mag_v14.f (the standard to be exported; last worked on 02/20/2014)
ipstd_20n_inp_mag_v14a.f (supposed to be used to allow in-situ inputs from STEREO)
ipstd_20n_inp_mag_v14b.f (allows the kinematic model to provide output to comet 67P/CG at 4 AU)
ipstd_20n_inp_mag_v14c.f (supposed to be used to allow reads of the IPS data from measurements of RA and Dec in the standard format)
On 09/11/2014 I got the ipstd_20n_inp_mag_v14b.f program to provide output to comet 67P/CG at 4 AU as before. On 09/12/2014 I set up the ipstd_20n_inp_mag_v14c.f program to read KSWC data from the general file produced for me by Jonghyuk from 2014 that has data until April 22, 2014. As I remember, back in August the file needed editing to make sources with negative first zero Decs readable. The program now works to read 2147 CR KSWC data from 2014 by typing ./ipstd_20n_inp_mag_v14c_intel gen=KSWC.[4],~/ The program still bombs after reading all the sources in the file, claiming there are none. On 09/16/2014 I got the ipstd_20n_inp_mag_v14c.f program to read the KSWC data from the general file. The readvipsn8g1.f subroutine would not allow either density or velocity sources to pass where the velocities were zero. The velocities were not zero, but the program did not know this from the save-set values. This is now fixed so that this restriction no longer applies. The program still bombs, however, because some zero values remain as the program goes into fixmodel for density! I found this, as once before, because the MkLOSWeightsmm.f subroutine was not correctly stated in the main program. On 09/16/2014, with ipstd_20n_inp_mag_v14c.f now fixed, I then ran the program on CR2147 (March 2014) on the KSWC data for that period. The results for velocity are terrible. On 09/25/2014 I again tried to run the ipstd_20n_inp_mag_v14b.f program to determine observations at comet 67P/CG. This program bombed in two places: one, in trying to read the general data, and two, at the end, because the analysis was not carried out to 4 AU. I learned that the readvipsn8g1.f routine does not work well and needs to be modified much like the earlier readvipsn8.f routine that does work, especially to go from one year to the next. On 09/25/2014 I began to modify the readvipsn8g1.f routine so that it looks more similar to the readvipsn8.f routine. I call this readvipsn8_2.f and modified the main program to accept this subroutine. On 09/29/2014 I discovered that the old readvipsn8g1.f routine probably did work to go from one year to the next, but that there was a bad print statement in the wrong place. Anyway, the program does now read the data from one year file for either KSWC data or Nagoya data. On 09/29/2014 I decided to begin the system to read data from more than one instrument and to sort these data. This program should be ipstd_20n_inp_mag_v14d.f. On 09/30/2014 I was able to get the program to read files from two different IPS stations one after the other. Now all I need is a sort routine (needed anyway in case there are bad times) to place all the sources in order. It reads files in the General format, as many as are input on the command line. The changes to ipstd_20n_inp_mag_v14d.f are all in the first portion, up until the end of the reads. The few changes in readvipsn8_2.f should serve as well in other programs to read both General and regular files. I haven't tried regular, however, and should probably throw out all entries except the general one. On 10/01-10/04/2014 I worked on getting a sort routine in place and the reads to output from the input sources. This all now seems to work in the ipstd_20n_inp_mag_v14d.f routine. The sort routine is called sort_IPS_data.f; it calls another subroutine, sortr8.f. Thus, at long last, I have an IPS tomography program that can read data from more than one observatory and sort these into a continuous set of data for the tomography program. The multiple observatories are specified on the command line as: ./ipstd_20n_inp_mag_v14d_intel gen=nagoya,~/ gen=KSWC,~/ nso_noaa=$DAT/map/nso_ktpk/hcss/nso_ktpk[4]_[3].fts The program needs to be told these are General inputs, and the input data in readvipsn8 can only be read in the general format.
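A minimal sketch of the kind of time-ordering step sort_IPS_data.f performs; this is not the actual subroutine, just an insertion sort of merged source records by observation time with one parallel array carried along (the real program would carry many more, or apply an index permutation from something like sortr8.f to each):

      subroutine sortsrc(n, time, name)
c Sort n merged IPS source records into increasing time order.
c time(n) are the observation times (e.g. MJD or fractional CR);
c name(n) is a parallel array reordered the same way.
      implicit none
      integer n, i, j
      real*8 time(n), tkey
      character*16 name(n), ckey
      do i = 2, n
         tkey = time(i)
         ckey = name(i)
         j = i - 1
   10    if (j .ge. 1) then
            if (time(j) .gt. tkey) then
               time(j+1) = time(j)
               name(j+1) = name(j)
               j = j - 1
               goto 10
            end if
         end if
         time(j+1) = tkey
         name(j+1) = ckey
      end do
      return
      end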
On 10/06/2014 I found I could not read both nagoya and MEXART files together, and so I found I needed to allow all of the last ist values to be OK in the function ireadgen8_2 in the readvipsn8_2.f routine. This did not work because the external entry was not correct in the main program. Finding this out and then fixing the problem was a major hassle that took me until 10/07/2014. Now the reads of any two different files work, and the MEXART data are supposed to now be processed with a larger value (0.2 arc sec) for the source line-of-sight weighting. If there were 3 or more sets of input data, these should also work if written in the General format. The readvipsn8_2.f subroutine can only read data from the General format file; I removed all other reads from this subroutine. On 11/11/2014 I created a new main program, ipstd_20n_inp_mag_v14e.f, that is intended to be used to read and use the newly-determined files that potentially give a Bz component of field using a revised version of XuePu's CSSS program. To work, the program needs to read at least the phi component of the magnetic field and deal with it correctly. To do this should require only a change to extractdvdm.f to output the phi field that masquerades as a radial field at the source surface. I have thus altered the subroutine extractdvdm.f, renamed it extractdvdmp.f, and had it write out the br field in the bp location in the correct place. On 11/12/2014 I found that in running the v14e program using the nagoya=nagoya data, the program would not work to get past the input that asks whether to input nagoya or general data. This began a revision process that now fixes this in the main program so that either the general or the regular data can be accessed by the same main program. With this finished I have now attempted to obtain the first real analyses of Bn with the data. So far, in 2007 there appears to be approximately a 4-day lag in the tomography-derived model using GONG Bn data. The ACE in-situ Bn data with the diurnal variation are identical to before; the tomography results lag by about 4 days. This might be due to the time it takes the magnetic field to progress to 15 Rs (but I doubt this). If so, the correlation would be very high. The correlation is slightly positive as is. In 2011 the correlation with GONG data is poor, but then so is the long-duration correlation with the Br and Bt data using GONG data relative to in situ. After running all three rotations, I find none give a high correlation, with the two later dates actually giving variations of about the correct magnitude, but just not good correlations. I wonder if the original premise is correct, or if I am doing something wrong with XuePu's program.
On 12/09/2014 I found to my dismay ** that the file with the good IPS data points output could not be read as a nagoya.2014 file by Paul's IDL program that reads the data points to plot. This will need to be attended to when multiple sites are used to provide data. I needed to hand-edit Julio's data file from the original to get the IDL program to work to input and plot points after running ./ipstd_20n_inp_mag_v14d_intel using the general input. I also discovered that I could not run the ./ipstd_20n_inp_mag_v14d_intel program from the current directory by typing ./ipstd_20n_inp_mag_v14d_intel gen=nagoya,./ gen=MEXART,./ nso_noaa=$DAT/map/nso_ktpk/hcss/nso_ktpk[4]_[3].fts on the command line. I think this is because the gen input has an 80-character limit to what can be used in the full input using gen=. I needed to place the nagoya.2014 and MEXART.2014 files in the main directory bjackson and use gen=nagoya,~/ to access them. On 1/10/2015 I changed the ipstd_20n_inp_mag_v14e.f version of the program so that it will accept data from a period of one rotation beyond the current date read from the computer in the non-forecast mode. I also changed the printout to indicate the fraction of the rotation entered (if in fact a rotation fraction is entered). On 1/14/2015 the ipstd_20n_inp_mag_v14e.f version of the program at that time was made into an ipstd_10n_inp_mag_v14e.f version in order to provide normal magnetic fields at this higher cadence from STELab data for use with a 2015 data set. This idea worked and provided a higher-cadence data set from the STELab tomography, and it also showed that with a positive lag shift the normal analysis matched the in-situ ACE normal mag analysis well for a period in early January 2015. At the time there was no Wind data to compare with, and the tomography was then used with a match to the ACE in-situ data. The analysis using this program requires that the g-level and velocity spatial filter be set to 7.0 (half its current value of 14.0) and the temporal filter be set to 0.325 (half its current value of 0.65), since these are not Ooty data, where this is now done automatically and set to 8.0 and 0.45, respectively. On 2/25/2015 the ipstd_20n_inp_mag_v14e.f version of the IPSTD program works to provide the analysis of the nso_nsp normal files to provide Bn, the normal component of the field. This works as in the current ApJ Letters paper to provide an r^-1.34 expansion outward from 1.6 Rs. Now needed is to include both the tangential and radial components of this field in the same way, in an automatic analysis of the closed component from Br and Bt from the source surface at 15 Rs. An addition to the current ipstd_20n_inp_mag_v14e.f, write3d_bbtmpe.f, and extractdvmp.f programs would be nice to have to do this automatically. On 2/25/2015 I began the work of revising the ipstd_20n_inp_mag_v14e.f program so that it would allow a read of all three components of the closed field plus the normal CSSS model field. The new program to do this is called ipstd_20n_inp_mag_v15a.f and is an extension of the ipstd_20n_inp_mag_v14e.f main program. On 2/26/2015 I continued changes to the main program and to several subroutines to be able to pick up the closed files in an automatic way. So far there have been changes made to: Makefile, ipstd_20n_inp_mag_v15a.f (main program), bfield_get.f (now bfield_getn.f), b3d_param.h (now b3dn_param.h), write3d_bbtmpe.f (now write3d_bbtrrtp.f). On 3/24/2015 the ipstd_20n_inp_mag_v15a.f file was closed by mistake.
There may have been some modifications to the file at that time and following work on it earlier. On 3/3/2015 I began work on a program ipstd_20n_inp_mag_v15ma.f, an extension of the ipstd_20n_inp_mag_v14e.f program, I believe. This new program provides an input from an external file and then provides a traceback from that file. Modifications to the main program have been placed at its end to import the file after the tomography has iterated, and for tests to provide the traceback from the existing file. Besides modifications to the main program, there has been significant work on the subroutine mkshiftdnma.f. On 3/28/2015 the mkshiftdnma.f subroutine was shown to work correctly. Modifications to this subroutine may still be needed to provide a traceback to the magnetic field surface below the input volumetric data, in order to extend the lowest traceback velocities to this level in the best possible way. On 9/2/2015 I renamed the program ipstd_20n_inp_mag_v14d.f to ipstd_20n_inp_mag_v15d.f. This program allows reads from multiple sites using the standard file input, using the input line: ./ipstd_20n_inp_mag_v15d_intel gen=nagoya,./ gen=MEXART,./ nso_noaa=$DAT/map/nso_ktpk/hcss/nso_ktpk[4]_[3].fts Both the Nagoya standard input and the MEXART standard input need to be placed into the current directory where the program is run to do this. A test of this program is available using data from late 2014, from Carrington rotation 2156.2 (beginning about 19 October 2014); or, in Forecast mode, a date of 2014/11/11 (MJD 56972) will work, since the last MEXART observation is on 2014/11/10. On 9/2/2015 I tried to use only one input - Nagoya - and this seemed to give the correct number of sources - 22 fewer for both speed and density. I also tried running the above in forecast mode, and this also seems to work. On 9/2/2015 I fixed a check to see that there was a general input path specified, and tried to run the program with nagoya=nagoya,,daily and this also worked to read the regular old nagoya file. This seems the best of all possible sets of program options. On 9/2/2015 I discovered that I needed to increase the character size of cfile in readvipsn8_2.f from cfile*80 to cfile*100 to match the input wild parameter and allow twenty more locations to be input. The values of cWildNagoya and CWildOoty in the main program also needed to be changed to *100 characters, and the cfile in readvipsn8.f needed to be changed from cfile*80 to cfile*100. On 9/2/2015 I began to modify writegoodsourcegv. The new version for the general format is called writegoodsources. On 9/3/2015 I continued with modifications to write out good sources from the general format. To do this I decided the best way was to modify the read subroutine to provide a 66-character input to writegoodsources.f that is carried through the program; thus, in readvipsn8_2.f I modified cSRCVGe to include the various parameters that are in the original nagoya.xxxx file, with the exception of the middle parameters. These are now replaced by RA and Dec, a site value (S for STELab, M for MEXART), and a value for the size of the source, now defaulted to 0.200 for MEXART and 0.100 for STELab. The file of good sources written by writegoodsources.f is called goodsources.txt. On 9/3/2015 I discovered that the goodsources.txt or negoya.xxxx file has bad information in the columns that provide the date for the in-situ data. This is now fixed by replacing a line in the reads of the in-situ read routines - all of them.
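A minimal sketch of building a fixed-column good-source record of the kind described in the 9/3/2015 entry above; the column layout, field widths, coordinate values, and format are illustrative only, not the actual cSRCVGe layout:

      program goodsrc
c Build a fixed-column "good source" record: source name, RA, Dec,
c a one-letter site code (S = STELab, M = MEXART), and a source size
c defaulted by site.  Layout and values below are illustrative only.
      character crec*66, csite*1, cname*16
      real*8 ra, dec
      real size
      cname = '3C48M'
      csite = 'M'
      ra    =  24.4221d0
      dec   =  33.1598d0
      if (csite .eq. 'M') then
         size = 0.200
      else
         size = 0.100
      end if
      crec = ' '
      write(crec, '(a16,1x,f10.4,1x,f10.4,1x,a1,1x,f6.3)')
     &      cname, ra, dec, csite, size
      print *, crec
      end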
On 9/3/2015 I ran both the ./ipstd_20n_inp_mag_v14_intel program on CR 2156.2 and the ./ipstd_20n_inp_mag_v15d_intel program with nagoya=nagoya,,daily, with gen=nagoya,./, and with gen=nagoya,./ gen=MEXART,./ These were all done in /test/Gen_input/2156.2/ The matches with the magnetic field are terrible (negative correlations). The comparisons with density and velocity are not so good either. However, the two programs run on the same rotation with nagoya=nagoya,,daily are identical, with the same numbers of sources. The magnetic field, too, gives the exact same results. The ./ipstd_20n_inp_mag_v15d_intel program with gen=nagoya,./ has 9 more G sources and 1 fewer V source, and both give slightly poorer correlations than the old program, but the time series look very similar. Mag fields are similar. The ./ipstd_20n_inp_mag_v15d_intel program with gen=nagoya,./ gen=MEXART,./ has 31 more G sources and 21 more V sources (22 MEXART sources). The densities give better correlations and the velocities somewhat poorer correlations. Mag fields are similar. On 9/3/2015 I ran both the ./ipstd_20n_inp_mag_v14_intel program and the ./ipstd_20n_inp_mag_v15d_intel programs in forecast mode on MJD 56972. Here, the time series with the MEXART data have somewhat higher correlations for both density and velocity when the program is run in forecast mode. In either the regular or forecast mode cases the program works, and now the version that allows the general format works, and the writegoodsources program works. Thus, this program is the one to use to advance other issues and to run for most cases. Discontinue the ./ipstd_20n_inp_mag_v14_intel program. Discontinue the ./ipstd_20n_inp_mag_v14c_intel program. The ./ipstd_20n_inp_mag_v15d_intel program still needs a few modifications installed, since the v14b (with STEREO) and v14e (closed field propagation) programs have been worked on. On 9/4/2015 I decided to rename the ./ipstd_20n_inp_mag_v15d.f program to ./ipstd_20n_inp_mag_v15.f, and thus to modify this version for future export to colleagues and to provide an updated IPS analysis. On 9/18/2015 I began the process of attempting to make the tomography program more immune to problems when there are too few radio sources to the east of the Sun. To do this, I now provide a program ./ipstd_20n_inp_mag_v15c.f to modify to help promote corotation more. (Program version v14c was used to provide a many-site tomography program that is now v15.) Hopefully this won't take too much work, but I will need to find where the use of few sources dominates the false speed forecast and then modify the program to hopefully fix this result. From archived forecast runs there are locations with poor forecasts at: 56940.8 (2014/10/10 20UT), 56944.8 (2014/10/15 18UT), (2014/10/16 19UT), (2014/10/19 18UT), (2014/10/20 19UT), (2014/10/24 19UT). On 10/26/2015 Hsiu-Shan Yu said that the GONG data files could not be read because of an error. The error said there were too many GONG data files called by blistall. The subroutine bistall.f is given an nfilemax input parameter of 100 from mapreadsingle.f or mapreadtimes.f, and this has now been changed in both of these subroutines to 200. With more GONG files than one every 6 hours per Carrington rotation input, this will need to be increased. On 11/05/2015 I made a ./ipstd_10n_inp_mag_v15.f version of the program.
************************************************
************************************************
On about 11/01/2015 I began again to work on the tomography that includes a 3D-MHD kernel.
Because this version of the time-dependent tomography is so different, I decided to begin another version of the readme files to document this. The original version, worked on earlier this year from about 02/01/2015 to 04/01/2015, was called ipstd_20n_inp_mag_v15ma.f. This program set up the tomography program to provide an externally-read file using the internal tomography data set at the end of its 18 iterations, and then used that data set to extrapolate downward from the outer levels to make the traceback routine. The new traceback subroutine is called mkshiftdnma.f. On about 11/01/2015 I worked on two subroutines called externalw.f and externalr.f. These write and read data files prepared for the tomography program from an outside source. The externalw.f program is unnecessary except to test the read routine, and to check to make sure that what is input from the external source is presented and can be related to the tomography program accurately and patched into the tomography program without changing the analysis much. On 11/13/2015 the externalw.f and externalr.f subroutines now work correctly to provide inputs from ENLIL (and probably other 3D-MHD models). On 11/13/2015, to provide a tomography program that uses the lowest input 3D-MHD level to then obtain magnetic fields from 15 Rs, I needed to modify the subroutine write3d_bbtm4D, which outputs magnetic fields in 3D over time, to write3d_bbtm4D_M. This version is supposed to allow a step in height from below the lowest tomography surface. To provide this option, it looks as if the included values of t3d_grid_fnc.h need to be modified to t3d_grid_fnc_M. How this can be done I do not know. On 11/16/2015 I provided a write3d_bbtm4D_M subroutine that includes two new small functions at its end called RRvalue(I) and RRindx(R). These two functions replace the RRvalue(I) and RRindx(R) functions in the t3d_grid_fnc.h include file, which is now called t3d_grid_fnc_M.h. The new include file has these two functions commented out. By 11/23/2015 I had gotten the write3d_bbtm4D_M subroutine to work and had written a subroutine x3dtshift.f to shift the shift matrix, so that it approximately shifts the values from the tomography program output into a new matrix that has a step size at the bottom that is only the RAD1-RADMS interval, to correctly shift magnetic fields from the magnetic surface RADMS. Although this subroutine seems to work, I cannot get it to pass the XC3DT shift matrix to the write3d_bbtm4D_M.f subroutine correctly through the main program, even though I now have write3d_bbtm4D_M.f passing the BB3 file out of this subroutine correctly. On 11/23/2015 I found an error in the write3d_infotd3dMM.f subroutine. It was passing the XC shift out instead of the time shift in the third matrix location. This does not affect the XC3DT passage into the main program. On 12/5/2015 I got the write3d_bbtm4D_M.f program to work correctly with Paul's system of the program locd as an included function. The trick was to provide locd with the value of nTmaxG instead of nT3d, and then to use the value of XCTD, the shift matrix in time, as a Carrington variable, even though the write3d_infotd3dMM.f subroutine passes the xctd matrix third shift out as a doy variable. The program was checked through to the end of the output after the write3d_bbtm4D_M.f subroutine.
However, now when the test files are produced that are supposed to be from the ENLIL inputs, the density and velocities appear OK, but the temperature and the 3-component magnetic fields do not appear correct, and thus this output now needs to be fixed. On 02/11/2016 I provided a new program: ipstd_20n_inp_mag3_v16mhd.f ************************* I now (in the next lines) put in the changes made up until I began to work on the ipstd_20n_inp_mag3_v16mhd.f program again on 9/10/2016. The new 3D-MHD program is called ipstd_20n_inp_mag3_v16a_mhd.f. On 2/10/2016 I finally resolved the problems I have been having with the ./ipstd_20n_inp_mag3_v16.f program. The problem was mostly in the write3d_bbtm_3.f subroutine. First of all, the earliest fix that I made to provide a modified input to the arrays written out gave the same values as before when I simply used the r^-1.34 fall-off to decrease the value at the source surface. Second, whether the normal field is jacked up by nothing or by the fall-off needed to be compatible with the r^-1 fall-off, the answer at Earth is also the same if there is no fall-off impressed onto the jacked-up value. Thus, I used the appropriate value at the source surface and allowed the r^-1.34 fall-off in the extract routine. Third, I found why the SOLIS data did not provide answers and the GONG data did. This had to do with the initialization of the flags just before going into the write_bbtm_3.f subroutine IBB=1,4 loop. Initializing these flags in the loop only works on the last group of files that are read. Finally, when everything worked, I then discovered why the normal fields never did provide as good a correlation as in Hsiu-Shan's and my earlier tests. In the best tests we used an ./ipstd_10n program. I modified the ./ipstd_20n_inp_mag3_v16.f program to ./ipstd_10n_inp_mag3_v16.f and then tried the analysis with both sets of fields, using a spatial and temporal filter like that of the 20n program and like the 10n program should be. The 10n program run using 20n filters works the best for the 2056 normal field, but not for the radial and tangential fields, by the way. Actually, the radial and tangential fields give the highest correlations with the 20n program run as it should be. Thus, I learned something. On 02/21/2016 I altered the main program to provide or not provide magnetic field 3D files. In this way, the only thing that needs to be produced is the e3 file. On about 02/24/2016 there were several problems noted that came up in the analysis. On about 02/24/2016 the main problem from the above was that the magnetic fields are not yet being made correctly in the current file system. After a few heights there are no longer any fields. It is unknown why Write3D_bbtm_3.f does not write out the magnetic files correctly at different heights. There was also a problem with the IDL program that reads these three-component files, but this was reportedly fixed by Paul, since Hsiu-Shan seems to see that the files are now read correctly to give boundaries. On about 02/24/2016 (at the same time) it was discovered that the 20n v16 version of the program did not provide density and velocity 3D files correctly. This was traced, with a few days' work, to the aips_wfmm.f function routine that uses the radio frequencies and source sizes from the standard input file. The function name was not updated from aips_wfm to aips_wfmm; fixing the name quickly fixed this problem, so that now densities are given correctly for the 20n v16 of the tomography program.
It was then discovered that the 10n version of this same program, and for that matter the v14 version of the tomography program, did not converge properly for CR 2153. Densities converge correctly, but not velocities for this rotation, and this does not always happen. It seems as if the in-situ measurements are not being counted or used in these analyses. On about 02/24/2016 - 02/28/2016 I scoped out why this analysis did not converge. I found many things, but still not the reason that the 10n version does not converge. The 10n version sets NGREST = 2, NVREST = 2, NGRESS = 2, NVRESS = 2, NF = 1. With this new parameter set, all spatial and temporal filters are now automatically set to half their value as a default and the tomography continues. I found the problem occurs before 2887, the end of the iteration loop, and probably after 2326, the beginning of the iteration loop. I found that changing NF from 2 to 1 did not cause the problem with the 20n version. I tried the 10n version with all filters set back to the 20n values, and this worked fine; it only changed the name, as it should. I found that no matter the filter setting (setting filters back to twice their default value, or to the same value as the 20n version), this didn't fix the problem. I tried selectively setting only one of the above parameters to 1 (all others 2): NVREST = 1, NVRESS = 1, NGRESS = 1, or NGREST = 1 did not change things. I tried all = 1 with NGREST = 2, filters at 14 and 0.65, and got really good densities and OK velocities. I tried all = 1 with NVREST = 2, filters at 14 and 0.65, and I got a bad Vtmp at all heights and all iterations. Densities were good, velocities bad. The error was before mkshiftd and after the G-level mean change. It does not seem to work to have NVREST greater than NGREST. So now set both NVREST and NGREST = 2, NGRESS and NVRESS = 1, and when this is done (filters at 14 and 0.325, or 14 and 0.65) this gives no errors, but the velocities still seem to be bad. Thus, I presume the problem is with NVREST = 2. Because the program behaves the same no matter the value of CONVT (the program goes bad whatever value of CONVT is set), the most suspect cause is either the value of aNdayV set by NVREST or something else. aNdayV is used in the main program in the subroutine timesmooth.f, and also in the subroutine MkVMaptdn0n_in.f, where it is again used in the subroutine timesmooth.f. Thus I suspect I had better look at the timesmooth.f subroutine to see if there is some improper clipping done in this subroutine. On 03/10/2016 I found that the timesmooth in MkVMaptdn0n_in.f first fills in ANMAP, the numbers of lines of sight in a map, completely with gridsphere, and then timesmooths ANMAP, filling in only the holes that are already accessed. This is modified by CONVT and aNdayV. When I changed aNdayV to 1.0 from its 10n set value of 0.5, the speed fits in situ become almost acceptable again. I am not sure if this is because the aNdayV values outside the MkVMaptdn0n_in.f loop do this or the ones inside. All timesmooth values within the loop 2326 to 2887 outside of MkVMaptdn0n_in.f only smooth accessed values. On 03/10/2016 I suspect the timesmooth filter is used correctly to filter the data; timesmooth uses CONVT as a filter that is halved. Thus, if CONVT is divided by half in the 10n version, the filter will remain the same size in units of NT, as it should. The usual way was to timesmooth only valid values of ANMAP after gridsphere within MkVMaptdn0n_in.f. This did not allow a 2153 fit to in situ.
Inside MkVMaptdn0n_in.f, when I timesmooth all values of ANMAP within the volume, this changes things considerably.
The fits on every other day when WTSM alone is all smoothed are excellent, while the fits on alternate days are poor, with jagged decreases from the average.
If I timesmooth all values of ANMAP, the fits are poor, with jagged increases. (ace3)
If I timesmooth all values of ANMAP and WTSM, every other day is fit, while the fits on alternate days are poor, with jagged decreases.
If I timesmooth all values of ANMAP, WTSM, and FIXM, the fits are then smooth, but seem to follow the IPS speeds and not the in-situ ones.
When I first timesmooth all ANMAP values, accessed or not, and then apply gridsphere inside MkVMaptdn0n_in.f, the fits to the in-situ values sort of work, but are jagged. (ace3)
When I first timesmooth all ANMAP and WTSM values, accessed or not, and then apply gridsphere inside MkVMaptdn0n_in.f, the fits to the in-situ values sort of work, but every other day is not fit, in a decrease from average.
When I finally timesmooth all values of ANMAP, WTSM, and FIXM prior to gridsphere inside MkVMaptdn0n_in.f, the fits are poor, and probably again follow the IPS speeds.
I went back to the original and then: if I timesmooth all values of ANMAP inside MkVMaptdn0n_in.f and timesmooth in the main program, the fits are poor, with jagged increases. (ahea) (The first idea of just smoothing ANMAP inside MkVMaptdn0n_in.f gives better results.)
On 03/10/2016 I realized that I was using the 10n program with NGRESS = 1 and NVRESS = 1. Oh ugh, the above may thus be invalid. Let's try the normal 10n version.
If I timesmooth only valid values of ANMAP, the fits are poor and above the in situ. (ake3 - corr = 0.301)
If I timesmooth all values of ANMAP, the fits are poor, with jagged increases above the in situ. (ale3 - corr = 0.301)
If I timesmooth all values of ANMAP and WTSM, the fits are very poor, with jagged decreases below the in situ. (ame3 - corr = -0.017)
On 03/10/2016 I realized I had better try something else. Let's lower the number of V and G points that constitute a LOS crossing according to the resolution. When I halved the LOS crossings, this gives a better result. (ane3 - corr = 0.393) When I halved the LOS crossings and increased CONDT to 0.64, this gives a better result. (aoe3 - corr = 0.479, but still not good enough)
On 03/11/2016 I decided I should see if the MkVMaptdn0n_in.f routine was really doing what it was supposed to for the inversion. I produced a MkVMaptdn0n_inm.f subroutine that did not count the IPS LOS equally in ANMAP. When I did that, the program used the in-situ values to dominate the results (the nv3h files only showed what was happening near the Earth), but the in-situ comparisons were bad. My speculation about this is that, when this is done, gridsphere smears out the result of the IPS observations into those of the in-situ, and then the final production of nv3h* files does not allow the IPS to exist.
**** I then eliminated any in-situ velocities in the tomography by requiring too large a number of in-situ values (600) into the mix using the standard MkVMaptdn0n_in.f subroutine. The normal amount was something like 420. When this was done, the tomography fit the in situ almost perfectly, corr = 0.782 (a small period 8/09 - 8/12 did not), but otherwise the fit was perfect! (ape3) The MkVMaptdn0n_inm.f subroutine also provides the very same fit when I eliminate any in-situ velocities in the tomography by requiring too large a number of in-situ values (600) into the mix.
This allows the text output to show why the MkVMaptdn0n_inm.f routine works so well when there are no IPS inputs. What this shows is that the "extra" values of IPS measurements produce a tapering of the values of ANMAP over the time of the in-situ observations, and this negates the in-situ fixes at these times. When no IPS velocities are present, all the extra values at times near the in-situ values are typed "bad".
On 03/14/2016: when timesmooth is used after gridsphere to force only in situ to be fixed near in time to the in situ, then corr = 0.462 (are3). When timesmooth before gridsphere is used to force only in situ to be fixed near in time and space to the in situ, then corr = 0.876! The above is very good! (see ase3) With the above, the densities are corr = 0.877 (also ase3), but not as good in some places as before. Here we can try the same with the densities.
Now I will try to see if the 10n version does as well as the 20n version. The old 20n version gave corr = 0.909 and corr = 0.856. The 20n version has an extremely good density corr = 0.944; the velocity, however, is rather poor, corr = 0.505 - smooth and good though. (ate3) If I use the 20n version with gridsphere at half the normal CONR, the density corr = 0.943 and the velocity corr = 0.707, much better. (aue3) If I use the 20n version with gridsphere at half the normal CONR and timesmooth with half the normal CONT, then density corr = 0.959 and speed corr = 0.785. Now let's see what the 10n version does. (ave3) If I use the 10n version with gridsphere at half the normal CONR and timesmooth with half the normal CONT, then density corr = 0.932 and speed corr = 0.644. (awe3) The density time series has more structure, but the correlations are poor. The speed is not as good, with some structure now manufactured where it shouldn't be. If I use the same 10n matched with Wind at delt = 0.5, the Wind time-series resolution is higher, but the correlations are 0.907 and 0.630. Now let's try the 10n version with only gridsphere at half the normal CONR and timesmooth with the regular CONT and see what happens. If I use the 10n version with gridsphere at half the normal CONR and timesmooth with the normal CONT, then density corr = 0.940 and speed corr = 0.271. (axe3) This is disappointing. The velocity has much structure, but with some things where they should not be.
On 03/14/2016 I can summarize. The best values for CR 2153 for both the 20n and 10n versions seem to be those where the normal values of CONR are used in gridsphere and the normal values of CONT are used in the MkVMaptdn0n_inm.f subroutine. For this, 20n gives density corr = 0.944 and velocity corr = 0.505, but smooth and good (ate3); 10n gives density corr = 0.877 and velocity corr = 0.876 (ase3). Let's now try to see if doing the same thing for a subroutine MkDMaptdn0n_inm.f will work better or worse. Oh my! Somehow the same thing for MkDMaptdn0n_inm.f did not work when I first did this.
On 03/25/2016 I found the problem above. I was not zeroing the WTSM matrix at the beginning of the subroutine MkDMaptdn0n_inm.f. This added some bad numbers into the inversion routine at the latitude of the in-situ measurements and produced an error. Now when I ran the 10n program, both the density and velocity fit like a glove, and the correlations were 0.970 density and 0.901 speed (1.0 day in-situ smoothing), and 0.979 density and 0.901 speed (0.5 day in-situ smoothing).
(aaae3) Unfortunately, the 20n version using the same technique did not work as well as before, with correlation values of 0.710 density and 0.712 speed (1.0 day in-situ smoothing). The tomography produced shallow excursions in comparison to in-situ. (aabe3) Providing a less-filtered timesmooth, by a factor of 2, in the MkMaptdn0n_inm.f subroutines helped somewhat, and the values are now 0.792 density and 0.713 speed (1.0 day in-situ smoothing). (aace3) Providing a less-filtered timesmooth by a factor of 2 in the MkMaptdn0n_inm.f subroutines and a less-filtered gridsphere as well made things go rather wild, with values of 0.457 density and 0.667 speed. (aade3) When I tried the MkMaptdn0n_inm.f subroutines with the timesmooth essentially turned off, by multiplying CONT by 0.1, the values were 0.794 density and 0.624 speed (1.0 day in-situ smoothing). (aaee3) This was good, but not as good as the regular version, which was 0.909 and 0.856. Thus, I think I will need to give up and make a switch in the program that uses the usual MkMaptdn0n_in.f subroutines when the program is run as the 20n program and switches to the new MkMaptdn0n_inm.f subroutines when it is run as the 10n program.
On 03/25/2016 I provided a switch that allows the values of NGREST and NVREST to switch from the MkVMaptdn0n_in.f and MkDMaptdn0n_in.f subroutines to the MkVMaptdn0n_inm.f and MkDMaptdn0n_inm.f subroutines, and this now works to give the best values using either of the two resolutions for CR 2153. The 10n values are 0.979 density and 0.901 speed (0.5 day in-situ smoothing). (aaae3) The 20n values are 0.909 density and 0.856 speed (0.5 day in-situ smoothing). (aafe3) I then, at random, tried the program on CR 2151 to see what happens in the 10n version. When this was done, the 10n values were 0.984 density and 0.906 speed (0.5 day in-situ smoothing). (aage3) I then, at random, tried the program on CR 2056 to see what happens in the 10n version. When this was done with the old nagoya.2007 file, the 10n values were 0.873 density and 0.756 speed (0.5 day in-situ smoothing). (aahe3) This was really pretty good, so I think this is a success. Now remaining is to provide good magnetic fields with this analysis, in three components, in the 3D volumes. This is not done yet.
On 03/26/2016 I moved the program ips_20n_inp_mag3_v16.f to ips_20n_inp_mag3_v16_old.f, as well as its executable, and I modified the ips_10n_inp_mag3_v16.f program into the ips_20n_inp_mag3_v16.f version, and I then compiled and tested to see that this program gave the same answers as the old ips_20n_inp_mag3_v16_old.f program. The program did this, including the magnetic field, just as before.
On 03/29/16 I worked with Oyuki Chang on modifications to the v16 program to provide inputs from MEXART data accurately. The 16n version of the program has been modified somewhat to automatically provide inputs from MEXART and Nagoya together. The changes, primarily to the main program, have included the iSys automatic fixes to select both STEL and MEXART minimum elongations (11.5 and 21.0 respectively) for g-level and velocity data, and print-outs to ensure that these were read correctly from MEXART data. All five data sources can now be read correctly by the standard-format readvipsn8_n.f subroutine, now in the gen entry point. These data sources are Ooty - 3, Nagoya - 4, KSWC - 5, MEXART - 6, Puschino - 7.
The weight routine uses the input radio frequencies (checked), and the source size (set to an appropriate approximation in readvipsn8_n.f if -999 is read from the standard format). The analysis seems to incorporate these changes all right.
On 03/29/16 I also provided an automatic way to set a factor for the g-levels, since this seemed needed to incorporate MEXART data alongside the 1.0 multiplicative factor used since year one for STELab data. Oyuki learned that the best way to get a good correlation with the MEXART time series was to put in a multiplicative factor of 0.5 for MEXART data with only 3 radio sources for CR 2156.7. Correlations with in-situ ACE level 0 density data for this CR were better than 0.5 when she did this, and about 0.42 or so for CELIAS densities. Speeds were a terrible 0.2 or so. On printing out the MEXART data, I found that a simple multiplicative factor was not the way to provide optimal reduced-excursion g-levels, and that (g-1)*0.2 + 1 gave proper excursions for MEXART data. For the last several years I have assumed that the value of a 1.0 multiplier for Nagoya was correct, and have not checked this. The SP journal article in 2013 uses this with the old Nagoya format data. However, I decided to compare the STELab IPS densities and velocities without in-situ fits just to check whether the STELab g-levels still worked.
**************!!!!!!!!! I found that the density conversion for the current STEL g-levels no longer works correctly, and that the STEL excursions in density are now too small, at least for CR 2156.7. The correlations for CR 2156.7 for STEL density were negative as well. Speed correlations were also negative. Of course STEL with in-situ fits was highly positive for both MEXART and Nagoya data, as always. Now really worried, I tried the standard old Bastille-Day period CR 1964.6 using the old format and found the 20n_v16 version of the program acceptable with these data. Checks show that both the old and new format STEL inputs for CR 2156.7 give nearly identical (poor correlation) answers of -0.399 (density) and 0.071 (speed) versus ACEL0 for the new format, and -0.356 (density) and 0.036 (speed) with the /daily files. Thus, somehow something very drastic has happened to the STEL data to give these poor values from STELab with 1985 g-levels and 512 speeds accepted from the old format. Again to check, I tried the v14 of the program, and with 1985 g-levels and 512 speeds accepted from the daily file, the 20n_v14 version of the program gave identical answers of -0.356 (density) and 0.036 (speed). No wonder the forecasts are bad from STEL these days!
On 03/30/2016 I fixed a problem with the setting for the fractional use of g-level at the Nagoya site. The Nagoya site was using the default MEXART fraction by mistake. Now the amplitudes of the Nagoya data are the same as those of the old program. The correlations using ISEE data alone are bad, but in any case the same for CR 2156.7 as with the old v16 and v14 main program.
On 04/27/2016 I began to modify write3d_bbtm_3.f and extractdvdm_3.f to provide accurate results. To do this I created two new subroutines, write3d_bbtm_3n.f and extractdvdm_3n.f, that have the same inputs and will be revised until the old versions can be replaced. On 05/05/2016 I was able to get these versions to work to give the appropriate normal field values, and to provide all three field components in one nsog* file correctly.
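For reference, the reduced-excursion g-level transform found for MEXART data in the 03/29/16 entry above is just a linear compression of the excursions about g = 1. A minimal sketch follows; the function name and the fixed 0.2 factor held in a parameter are illustrative only, not the program's actual implementation.

C     Minimal sketch of the MEXART g-level rescaling (g-1)*0.2 + 1
C     described above; name and parameterization are illustrative.
      REAL FUNCTION GMEX(G)
      REAL G, FRAC
      PARAMETER (FRAC = 0.2)
      GMEX = (G - 1.0)*FRAC + 1.0
      RETURN
      END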
To do this, the field file header needed to be fixed to contain the correct power, i.e., _2_1_1.34, and both the write3d_bbtm_3n.f and extractdvdm_3n.f programs needed to be modified. Hsiu-Shan helped to run vu_insitu.pro with Paul's help, and this certified that the field values from the 3D nsog* files are very similar to the values from the e3 file at Earth. The current write3d_bbtm_3n.f subroutine is now fixed so that it will input the Bn fall-off FALLOFFBN = 1.34.
On 05/05/2016 there was still a problem in the header. The nsog* file header is not filled if the nson* files were not made. This may not be a problem, but it is disconcerting. I tried to get the header filled by placing a t3D_fill_global into the write3d_bbtm_3n.f subroutine when a null was placed in cPre, but this did not work to fill the headers in the nsog* files. However, I placed the range of latitudes into the main program before the write3d_bbtm_3n.f subroutine, and I was able to get these input. I took a careful look at the header values that were not filled into the nsog* header, and none of the values that were filled in from the nson* file write are needed. This was confirmed by Hsiu-Shan. Hsiu-Shan also confirmed that the current program works to provide correct amplitude values of the normal field. Thus, I then removed a great deal of the diagnostic print-out from the write3d_bbtm_3n.f subroutine, labeled the old subroutines old, removed the _3n from the subroutine name, and removed the write3d_bbtm_3n.f subroutine access from the makefile. All seems to work now.
**On 05/12/2016 I found a significant error present since 2012, when I changed the MkLOSWeights.f subroutine for use with inputs of frequency and source size. The values of source size and frequency were input as values along the line of sight rather than according to the source value. I fixed this by implementing a subroutine MkLOSWeightsx.f in place of MkLOSWeightsmm.f. The program now works. The weight function was tested when I implemented a routine to provide LOS weights both for Thomson scattering and g-value, to show how a column is represented differently when it moves outward toward the Earth or outward away from it. Whatever the error, it does not seem to have changed the solutions much, but I am still checking.
On 06/06/2016 the write3d_bbtm_3.f subroutine does not seem to work to provide nsog* files in the forecast mode. The files are read, and the following extract program works to provide acceptable magnetic fields, but the nsog* files are not filled over the entire period. These are low-resolution files and should be at the resolution of the base files, since the files that are filled seem ripply when the IDL displays them as ecliptic cuts.
On 06/16-19/2016 I separated the write3d_bbtm_3.f subroutine so that a new subroutine, get_bbtm_3.f, fetches the fields separately. This seems to work, so far. The new get_bbtm_3.f subroutine obtains the magnetic field source surface maps at the cadence of the tomography program. There was also a significant error in the extractdvdm_3.f subroutine. The subroutine input magnetic field maps at a cadence of 6 hours but assumed they were at the cadence of the tomography program (24 hours), and interpreted them as such. The answers are pretty much the same even if not exact. Now this is fixed, with the current extractdvdm_3.f doing the correct thing.
The big news is that the write3d_infotd3dMM_3.f subroutine is no longer needed to produce the TT3D files, and thus there is no longer a need for this subroutine. On 07/16/2016 the current version of the program, with magnetic fields produced at the tomography cadence, was completed. The write3D_bbtm_HR_3.f subroutine now allows three-component fields at the resolution of the nv3h* files, and this is a big step in the programming, because it allows higher-resolution magnetic files to be output so that they no longer have severe digital variations in height and time. The nsog* (GONG) files from the program were always produced, but for some reason the nsos* (SOLIS) files, although read, did not appear to be written for CR 2056. This was caused (as before with earlier programs) by bad header values in the input CSSS-produced SOLIS files for CR 2056 and CR 2057. With these files removed from the data stream, the SOLIS inputs can now be output for these rotations.
From ~07/20/2016 - 08/17/2016, much work was done to provide IDL imaging outputs and certification of the 3 field components (as well as density and velocity maps), plus synoptic maps of all these components. On 08/17/2016 I fixed the current writegoodsources.f subroutine to provide correct outputs of lines of sight and bad values marked by the program, so that the IDL can read them to produce correct sky sweeps with only valid lines of sight. The fix was a simple format change in the subroutine error write.
On 09/10/2016 I began again to use the current version of this tomography program to run from an external source. I was able to modify the 16a version of the main program up to line 2849 in 16a_mhd.f to run as the usual 16a program. Now please see the text readme_ipstd_0_in-situ_3D-MHD.txt for more details.
*************************
This has all the benefits of the latest programming for the new general IPS format (or the old format), and can also input the CSSS model and/or 3-component closed magnetic fields, as the ipstd_20n_inp_mag3_v16.f program does at this time. This includes updates to the write3d_bbtm4D_M.f subroutine, now called write3d_bbtm4d_3.f. This allows an extraction of fields from below the tomography source surface as before, but also from up to four different types of magnetograms, and from both the regular CSSS model and the closed-field component modeling.
On 02/12/2016 I updated the main program so that it now calls the mkshiftdnma.f subroutine through a wrapper called mkshiftdnma_pre.f, which allows much diagnostic output as well as placing the real call to mkshiftdnma.f inside this subroutine. This allows the main program to be less cluttered. On 02/12/2016 I was able to get the extractdvdm_3 subroutine with up to 4 input components to work in ipstd_20n_inp_mag3_v16mhd.f. To do this I needed to make a new routine called extractdvdmhd_3. This calls a new version of get4dval.f called get4dval_3.f, which calls a new function rrindxx(R) that sets the index of the height to use when a common block (common RADMSS, RAD11, dRR1) is input with the magnetic field surface (RADMSS), the tomography source surface (RAD11), and the nominal value of dRR (dRR1). On 02/21/2016 I altered the main program to provide or not provide magnetic field 3D files. This is important to provide a debugged program so that now the only thing that needs to be produced is the e3 file.
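The rrindxx(R) function mentioned above only needs to turn a radius into a height index given the magnetic field surface (RADMSS), the tomography source surface (RAD11), and the nominal radial step (dRR1) passed in a common block. A hedged sketch is below; the indexing convention (index 1 at the tomography source surface, index 0 below it at the magnetic surface) and the common block name are assumptions, not the actual rrindxx code.

C     Hedged sketch in the spirit of rrindxx(R): map a radius R to a
C     height index of the 3-D grid.  RADMSS, RAD11, and DRR1 follow
C     the common block described in the text; /RADCOM/ and the
C     index-0/index-1 convention are assumptions for illustration.
      INTEGER FUNCTION RRINDX(R)
      REAL R, RADMSS, RAD11, DRR1
      COMMON /RADCOM/ RADMSS, RAD11, DRR1
      IF (R .LT. RAD11) THEN
         RRINDX = 0
      ELSE
         RRINDX = 1 + NINT((R - RAD11)/DRR1)
      END IF
      RETURN
      END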
******************
On 09/10/2016 I began again to use the current version of this tomography program to run from an external source. I was able to modify the 16a version of the main program up to line 2849 in 16a_mhd.f to run as the usual 16a program. Now please see the text readme_ipstd_0_in-situ_3D-MHD.txt (or the portion loaded above, on about 02/11/2016) for more details. The new program is called: ipstd_20n_inp_mag3_v16a_mhd.f
On 09/10-15/2016 I settled on a way to provide the 3D-MHD inputs and outputs. The first time through, with 18 iterations, the kinematic tomography will set up to run with a lower base at the height of the 3D-MHD model, unless told otherwise. The 3D-MHD model inputs and specifications are given up front at the beginning of the run, including the output files (nv3m, e3m, nosm, etc.) that are to be produced. The kinematic model is all as before, with a switch at each location different from the last 3D-MHD iteration that tells the tomography program whether to produce a kinematic traceback and, ultimately, the outputs asked for by the kinematic model at the end of the tomography. The outputs can be either for the kinematic model or for the 3D-MHD model, which uses a different traceback system and a base height for the magnetic field inputs that can be different from the height of the tomography base. The magnetic field inputs will be read in a second time at the current magnetic base that the CSSS model (or other) has set up.
On 09/14/2016 I decided that it would be best to add a second small shift array to be able to interpolate from the magnetic field base to the tomography base. The increments from this can be added to the shift values in order to go down to the magnetic field base. For the write bbtm files and the extract, this is necessary in order to get back to the magnetic field surface that is below the tomography surface. Both the magnetic field nsog*, etc., and nv3b* files are written out from the tomography surface upward. This requires that write3d_bbtm_HR_3.f and extractdvdm_3.f accommodate these changes.
On 09/16/2016 I completed changes to both write3d_bbtm_HR_3.f and extractdvdm_3.f as the subroutines write3d_bbtm_HR_3dmhd.f and extractdvdm_3dmhd.f, and hopefully these will work to add the appropriate increment from a new matrix XCshiftM that has only 2 nmap elements. These incorporate the external xcshift matrix, called xcshift3 to distinguish it from the exact same type of shift matrix xcshift, as well as the parameter RRMS that tells both subroutines that the magnetic field height is different from the tomography height and by how much. I also built a small subroutine to fill the XCshiftM matrix approximately for use with the kinematic model in tests. This subroutine is called xc3dtshift_rrms.f. On 09/16/2016 all three of the above subroutines compiled successfully, and if all works this is a simple method that can now handle the differences in the two source heights RR and RRMS even in the new mkshift subroutine. However, the output writes have yet to be tested.
On 09/18/2016 I modified write3d_bbtm_HR_3dmhd.f to write out bbb3 files internally that hopefully will work to produce test output files, and I made write3d_infotd3dM_HR_3.f into a subroutine, write3d_infotd3dM_HR_3dmhd.f, that will also work to produce test output files. Both of these subroutines now compile.
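Conceptually, the second shift array described in the 09/14/2016 entry above just holds an increment that is added to the regular traceback shift to carry the trace from the tomography base down to the magnetic field base at RRMS. A minimal sketch of that addition follows, with assumed array shapes and names; the real XCshiftM and xcshift arrays also carry a displacement-component index and the time handling described in the text.

C     Minimal sketch: add the tomography-base-to-magnetic-base shift
C     increment (XCSHM) to the regular traceback shift (XCSH) to form
C     the combined shift (XCSH3).  Shapes and names are illustrative.
      SUBROUTINE ADDSHM(XCSH, XCSHM, XCSH3, NLNG, NLAT, NT)
      INTEGER NLNG, NLAT, NT, I, J, K
      REAL XCSH(NLNG,NLAT,NT), XCSHM(NLNG,NLAT,NT)
      REAL XCSH3(NLNG,NLAT,NT)
      DO K = 1, NT
         DO J = 1, NLAT
            DO I = 1, NLNG
               XCSH3(I,J,K) = XCSH(I,J,K) + XCSHM(I,J,K)
            END DO
         END DO
      END DO
      RETURN
      END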
On 09/27/2016 I realized that the subroutine get3dtval.f was just the one needed to access a single Carrington array over time, and that this could supplant the current fix to both the write3d_bbtm_HR_3dmhd.f and extractdvdm_3dmhd.f routines. I thus removed the single bottom zero from the array xcshiftm and now have a 4-element array (longitude, latitude, time, and a 3-element longitude, latitude, time displacement) to provide the trace from the tomography level to the magnetic field source surface. Hooray, this now works for the extractdvdm_3dmhd.f routine, and the write3d_bbtm_HR_3dmhd.f subroutine has been modified but not tested. The xc3dtshift_rrms.f routine had an error in the inputs from the main program that is fixed, and it now produces an output file xcshift3 from xcshift if mode=0. The latest compilation of the ipstd_20n_inp_mag3_v16a_mhd_intel program was on 17 October.
On 11/20/2016 I realized that the current ipstd_20n_inp_mag3_v16a_mhd_intel program did not have the correct get_bbtm_3.f subroutine installed, and so the program was recompiled. On 11/22/2016 I found I had not provided a BBB3 parameter in the write3d_bbtm_HR_3dmhd.f subroutine installed in the main program. This is now done. On 12/01/2016 I found an error in the get_bbtm_3.f subroutine: the tangential field component was not typed bad initially. I do not think this gave a problem in the magnetic field analysis, however.
On ~12/17/2016 I was finally able to get the subroutine externalwtest.f to write out test files successfully. This required further modifications to the set-up of the internal files used as input both for these tests and for the tests of these inputs. These inputs are also used to fill in when there are incomplete numbers of input files available read externally.
On ~12/20/2016 I realized that the subroutine externalwtest.f, which called externalwrite.f, was very similar to the subroutine externalrtest.f, which called externalread.f, and over the next week I was able to modify externalwtest.f into a new subroutine, externalrwmhd.f, that does both functions. This subroutine first takes the inputs from the kinematic (or MHD) tomography and tests to see if they are completely provided. It then outputs 3D files over time. These outputs are tested for completeness because they can be used to fill in when there are incomplete numbers of input files available that are read externally. This subroutine then outputs test files using externalwrite.f if this is requested. It will also then read in input files externally using subroutine externalread.f, and overwrite whatever files have been provided from the previous iteration of the tomography with the files read in. On ~12/28/2016 I was finally able to get the externalrwmhd.f routine to write and read external files.
On 01/09/2017 I was able to provide a new subroutine, fill_in.f. This subroutine is called by externalrwmhd.f for each output 3D file (density, 3-component velocity, 3-component magnetic field, or temperature), and fills in the gaps in the input external files if any exist. It also interpolates and smooths across any bad spots between the input external files and the output files present from the tomography program on the earlier iteration.
On 01/24/2017 Hsiu-Shan found, and I fixed, a "bug" in the tomography program that caused the year-end data from the magnetic field to not be written out. The cause was an incorrect input of the year to the write_bbtm_HR_3dmhd.f subroutine.
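The gap filling described for fill_in.f above is, at its core, a one-dimensional interpolation across bad or missing samples between valid neighbors. The sketch below shows only that core idea for a single series with an assumed bad-value flag; the real fill_in.f works on full 3-D files and also smooths across the seams with the previous-iteration tomography output.

C     Illustrative sketch: linearly interpolate across runs of bad
C     values in a 1-D series.  BAD is an assumed flag value; runs that
C     touch either end are filled by copying the nearest valid value.
C     Not the actual fill_in.f.
      SUBROUTINE FILL1D(V, N, BAD)
      INTEGER N, I, J, K
      REAL V(N), BAD, F
      I = 1
      DO WHILE (I .LE. N)
         IF (V(I) .EQ. BAD) THEN
C           J = last valid index before the gap (0 if none);
C           K = first valid index after the gap (N+1 if none)
            J = I - 1
            K = I
            DO WHILE (K .LE. N .AND. V(MIN(K,N)) .EQ. BAD)
               K = K + 1
            END DO
            DO WHILE (I .LT. K)
               IF (J .GE. 1 .AND. K .LE. N) THEN
                  F = REAL(I - J)/REAL(K - J)
                  V(I) = V(J) + F*(V(K) - V(J))
               ELSE IF (K .LE. N) THEN
                  V(I) = V(K)
               ELSE IF (J .GE. 1) THEN
                  V(I) = V(J)
               END IF
               I = I + 1
            END DO
         ELSE
            I = I + 1
         END IF
      END DO
      RETURN
      END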
The value input should have been IYRBG rather than iYr. Now the tomography program works to provide output files of magnetic field data. I have taken this opportunity to modify the makefile to produce a revised program named ipstd_20n_inp_mag3_v17_mhd.f. This is now made into an executable file.
From 01/23/2017 to 02/16/2017 I worked on a program, translate.f. This calls several other subroutines, including new ones called enlil_read.f, msflukss_readc.f, msflukss_readt.f, and interpolate_enlil.f. The main program, translate.f, reads the MHD files from these two programs and then interpolates them into a file that can be read by the UCSD tomography program. In its good form, translate.f interpolates ENLIL or MS-FLUKSS files written in IHG coordinates at the 0 - 360 degree longitude resolution given by the MHD program, but interpolated into the latitude steps of the tomography, and then outputs a file replicated into the 3-rotation structure of the corotating tomography program input. On 02/16/2017 this seemed to work to provide an MS-FLUKSS input file that was successfully read and iterated by the ipstd_20n_inp_mag3_v17_mhd.f program.
On 02/21/2017 I provided a facility in the program ipstd_20n_inp_mag3_v17_mhd.f to distinguish between kinematic model source removal and MHD model source removal. In this way, the sources removed in the analysis of the kinematic model can be reinstated in the MHD modeling so that the MHD model gets a fresh start in order to make an estimate from all the sources read using the MHD modeling. Only those sources removed within the iteration loop are labeled and reinstated for the MHD iterations, and the modeled sources removed are also listed. For the kinematic model for CR 2014.0, 15 velocity sources out of the 363 and 1230 IPS and Wind numbers were removed, and 77 out of the 2048 and 1230 IPS g-level and density values were removed. For the MHD model, 21 velocity and 76 g-level/density sources were removed.
On 03/10/2017 I fixed the problem of the write3d_infotd3dM_HR_3dmhd subroutine not writing to the beginning or ends of the times by typing .FALSE. the limiting hooks bDdener, bVdener, bDverer, bVverer. In the main program some of these were originally set to .TRUE., and when the base analysis was not invoked these .TRUE. flags carried through and limited the values output to the 3D outputs.
On 03/17/2017 I was able to get the translate.f program to work to provide good interpolated files from the MS-FLUKSS inputs I was given by Tae Kim. My original interpolate program worked, but had a bug that did not allow a continuous interpolation of the file into the version UCSD needs. I discovered this by checking the input with a reader that provided base outputs from the produced input files. This probably had to do with the beginning and end values of the input files, which were not always input depending on the location of the Earth-subtracted longitude in the translate.f subroutine called enlil_read.f. Now fixed, the input files read from the translated MS-FLUKSS inputs look a lot like those output from the kinematic run of the program.
On 03/17/2017 I found yet another bug in the main program. In running the analysis through to provide new base outputs from the MHD iterations, I discovered that the base files produced seem to wrap many times around the Sun, as if somehow the xshift file times are interpreted in Carrington variable rather than days. This bug seems to be primarily in the write3d_infotd3dM_HR_3dmhd.f subroutine.
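The interpolation step of translate.f described above has two simple pieces: interpolate each MHD longitude ring onto the tomography latitude grid, and replicate the resulting single-rotation map into the 3-rotation structure the corotating tomography expects. The sketch below shows only those two pieces under assumed grid conventions (uniform, increasing latitude grids; illustrative names); the real translate.f also handles the IHG coordinates, the Earth-subtracted longitude, and the ENLIL/MS-FLUKSS readers.

C     Illustrative sketch only: interpolate a single-rotation map from
C     the MHD latitude grid onto the tomography latitude grid, then
C     replicate it into a 3-rotation output array.  Grids are assumed
C     uniform and increasing; all names are illustrative.
      SUBROUTINE LATREP(VIN, XLATIN, NLNG, NLIN, VOUT, XLATOT, NLOUT)
      INTEGER NLNG, NLIN, NLOUT, I, J, K, JROT
      REAL VIN(NLNG,NLIN), XLATIN(NLIN)
      REAL VOUT(NLNG,NLOUT,3), XLATOT(NLOUT), F
      DO J = 1, NLOUT
C        bracket the output latitude in the input grid
         K = 1
         DO WHILE (K .LT. NLIN-1 .AND. XLATIN(K+1) .LT. XLATOT(J))
            K = K + 1
         END DO
         F = (XLATOT(J) - XLATIN(K))/(XLATIN(K+1) - XLATIN(K))
         F = MAX(0.0, MIN(1.0, F))
         DO I = 1, NLNG
            DO JROT = 1, 3
               VOUT(I,J,JROT) = VIN(I,K) + F*(VIN(I,K+1) - VIN(I,K))
            END DO
         END DO
      END DO
      RETURN
      END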
On 3/20/2017, to check this, I disabled all the *_pre.f determinations and just ran the kinematic model, and I still get a 3D volumetric write that is incorrect. Thus, something is wrong with the loading of the inputs to the write3d_infotd3dM_HR_3dmhd.f subroutine. On about 4/10/2017 I used the write3d_infotd3dM_HR_3.f subroutine to write the nv3m* output files and found that this worked. Thus the bug that has plagued the tomography is not in the second iterative portion of the tomography; instead it is caused by the write3d_infotd3dM_HR_3dmhd.f subroutine, which will need to be rewritten for use in providing a higher-resolution output. Thus, I can now attempt the iterative 3D-MHD tomography and check it.
From 4/10/2017 - 5/24/2017 there were many attempts to provide a sensible iteration from the MHD output and MS-FLUKSS inputs. I revised several subroutines, including the fixmodeltdn.f subroutine, to provide limits if the model lines of sight somehow produce non-numbers. It turns out that this fix was not needed to provide the analysis when the traceback is only provided once before the MHD iterations. The non-numbers developed when the iterations included changes to the traceback during the MHD iterations following the provision of the MHD volume. In the end I decided to provide a traceback only once from the input volumes, and then maintain this same traceback through subsequent fits of the IPS and in-situ data. When this was done, the volumes iterated to a converged state, the new base density and speeds were provided, and the volumes produced were evenly distributed from 0.1 AU up through 1.0 AU. The first iteration now provides somewhat slower speeds near the ecliptic at 0.1 AU, and higher and spotty speeds near the ecliptic at 1.0 AU. The in-situ speed values have a correlation of 0.736, but are somewhat too high, especially towards the last portion of CR 2114.0. The number of 3D-MHD iterations to fit the IPS modeling can be varied, but after attempting 8 iterations at the beginning, I noticed that the body variations decreased significantly when I used 18 iterations, as in the kinematic modeling, and so I have tentatively decided to continue this. The MHD iterations go through rapidly because the same traceback is used for each iteration and does not need to be recalculated; only the fits need to be iterated.
On 4/25/2017 I was finally able to get the MS-FLUKSS 3D-MHD program to iterate one time and provide sensible answers, at least in an overall sense, even if it does not reproduce the velocity and density values at Earth exactly. I strongly feel that more iterations are necessary to provide better in-situ fits, especially to density.
On 5/16/2017 I found that sometime after 4/20/2017 I had made an alteration to the program such that it no longer gives an answer that follows the in-situ data at the beginning of the rotation. Since this is after the time of my altering the program to not replace the bad sources, I suspect the problem lies in my alteration of the fixmodeltdn.f subroutine or a bug introduced at about that time. On 5/16/2017 I found that replacing the old fixmodeltdn.f subroutine in the main program did nothing to change the kinematic model result for the MHD program, and that the kinematic result is still bad. Thus something else that I did must have altered things. I now have two versions of the kinematic runs that are exactly identical up to where sources get thrown out, and then they begin to deviate ever so slightly after that.
There seems to be a discrepancy in the latter portion of the analysis, and this is almost certainly why the pure kinematic run at 0.1 AU differs: the model kinematic run using Nagoya as an input for CR 2114 gives fits of D = 0.926 and V = 0.941, while the MHD model MS-FLUKSS kinematic run for 2114 gives fits of D = 0.691 and V = 0.654.
On 5/17/2017 I found that the MHD V17 version of the usual kinematic model run provides output files whose two sets of images differ only very slightly, even though the time-series plots are significantly different. This means (probably) that the fault lies with the extract routine, and not with the actual data files themselves.
On 5/18/2017 I found that all the outputs of the MHD program run to provide MS-FLUKSS kinematic modeling for CR 2114, except the very last where the e3__ files are produced, have appropriately excellent correlations with the in-situ data - both V and D almost all above 0.9. This implies that things go wrong at the very last part of the MHD program, and thus again the fault probably lies at the end of the program or perhaps with the last extract routine itself.
On 5/18/2017 I found that when the magnetic field analysis was skipped, the extract file produced without the magnetic fields gave excellent in-situ correlations of V and D. However, when the magnetic fields were included, the in-situ correlations of V and D were not good. Since both extract routines use the same input base values of VMAP and DMAP, this implies that the fault takes place either in the traceback matrix or in the ddfact or dvfact files.
On 5/19/2017 I found the problem. When I replaced the traceback matrix XCshift3 with XCshift, the original from the main program, the extract routine gave good answers. The values in XCshiftM remained the same. The fault must be in the very simple program xc3dtshift_rrms. That is something I surely did not expect. The problem was that in xc3dtshift_rrms I had set nT to nTmax, and this threw off the xcshift3 and XCshiftM array production. I now need to see if there are other places where this affects things. On 5/19/2017, besides removing the nTmax from xc3dtshift_rrms, I also removed a variable TT3D from extractdvdm_3dmhd. This had nothing to do with the answer given by the subroutine. I see that write3d_infotd3dM_HR_3.f inputs nTmax, and that this comes in at several places, including in the doubly-dimensioned variable XCbe. This routine has been used for a long time, and while I do not expect a problem, it would be good to check. The current analysis now seems to work to give at least the first iteration a fairly good in-situ fit.
On about 5/31/2017 I was able to dev
On about 5/29/2017 I noticed that the MHD program does not provide a good answer at the end of the time series for one or two days, and it appears that the tomography should carry on for several days following the end of the Carrington rotation for the MHD not to get into problems. I thus modified the main program to add 4 days to NTV and NTG, and on command (mode = 2 and forecast = .TRUE.) the subroutines write_bbtm_HR_3dmhd.f, write3d_infotd3dM_HR_3.f, and write3d_infotd3dM_HR_3dmhd.f now provide output 4 days beyond the end of the Carrington rotation. This has been tested and now works.
On 6/5/2017 I have now iterated the MS-FLUKSS MHD tomography 6 times and I have found that the system does not converge as well as I would have liked, and maybe not at all. Thus a new tactic is needed.
I now tried not throwing out sources during the IPS MHD iterations, and as far as I can tell this works to provide in-situ data fits as well as before, or maybe even a little better. With that, I have now tried to iterate the MHD tomography and make the analysis go through mkshiftdnmam.f twice on each iteration, as in the kinematic iterations. This did not work before, when it was tried by altering the mkshiftdnmam.f subroutine, which was not correct at the time, but it does now, and the results of the MHD tomography on the first iteration now come out significantly better than they did when this was not tried. Thus, I expect that I had better proceed in this way in the future with subsequent iterations. To not throw out sources using the current analysis will require that I save the sources thrown out on a middle iteration in a file before again making subsequent iterations, and always type bad the sources that the MHD threw out in the following iterations.
From 06/15/2017 to 07/15/2017 the analysis was helped considerably because Dusan visited UCSD from 07/24-27/2017. He installed a recent version of ENLIL on Bender, and over the following week made it work to provide good ASCII outputs for the tomography program. Prior to and during this, I shell-scripted the analysis so that, with the exception of a password needed to run ENLIL on the soft account on Bender, all works from my account on Bender. With this working, one iteration after the next was possible from about 01 July.
On 07/15/2017, after considerable work, I was able to get the ENLIL program to iterate reasonably well with my current analysis. There were several bugs in the tomography program discovered over the last few months. The translate.f program worked for density and for the first velocity volumes, but not for subsequent volumes. The difficulty, discovered and fixed in the week prior to today, was caused by not transferring the whole nTmax times into the read_ENLIL and other routines. Now the transfer program seems to work appropriately for all values of velocity and magnetic field. Of a more serious nature, and more difficult to find in the week before 07/15/2017, was the fact that the last few read volumetric data files did not allow a complete traceback to the solar surface. This was registered from function FLINT as a very large negative number in the mkshiftdnmam.f subroutine. This caused an error that propagated into the nearby surface maps by altering xcshift, through the dvfact and ddfact values. The tomography program would go very strange from one end to the other of the Carrington rotation analysis. By making the local dvfact and ddfact equal 1.0, the problem now seems abated. An additional benefit is that now there are very few densities driven negative in runs of ENLIL, and this is a really positive outcome of a huge effort.
On 07/15/2017 many iterations of ENLIL were possible, but the technique thought best to use was to update xcshift and the dvfact and ddfact values on each iteration of the data fits of the MHD tomographic analysis. This was soon discovered to NOT CONVERGE. It is unclear to me why, but it seems that subsequent iterations of the MHD using this technique would first produce very large average speed and density maps, with subsequent small to large values of dvfact/ddfact, or very small density maps with subsequent very large to small values of dvfact/ddfact. These discrepancies became larger and larger with each MHD iteration.
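The traceback fix described in the 07/15/2017 paragraph above amounts to a guard on the ratio factors: if the traceback to the solar surface fails (registered as a very large negative number from function FLINT), the local dvfact and ddfact are set to 1.0 so the failure does not propagate into the surface maps. A minimal sketch of that guard, with an assumed threshold and illustrative names:

C     Minimal sketch of the guard described above.  TRACE stands for
C     the value returned by the FLINT-like traceback interpolation;
C     BADLIM is an assumed threshold for "very large negative".
      SUBROUTINE RGUARD(TRACE, DVFACT, DDFACT)
      REAL TRACE, DVFACT, DDFACT, BADLIM
      PARAMETER (BADLIM = -1.0E20)
      IF (TRACE .LE. BADLIM) THEN
         DVFACT = 1.0
         DDFACT = 1.0
      END IF
      RETURN
      END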
On 07/16/2017, to fix the above error, I went back to providing only an initial value of the xcshifts and dvfact/ddfact values at the beginning of the MHD iterations. This now seems not to oscillate as before, but perhaps this is simply because there have not been many iterations so far. The second (2) iteration of the ENLIL analysis of 0, 1, 2, 3, 4 (0 is the kinematic run) seems to give excellent comparisons with in-situ, and even exceeds the comparison (r=0.954) of density in-situ data over that of the kinematic model. The velocity comparison (r=0.874 now) would also be excellent on this iteration except that the beginning of the Carrington time sequence has a discrepancy. The third iteration gives a somewhat worse comparison (r=0.814 for D and r=0.842 for V). These comparisons are still pretty good, and the density and velocity maps do not seem to be oscillating, so if the analysis does not diverge, perhaps this is success. If there is a divergence, then one idea is to provide only the dvfact or ddfact values at the very beginning of the MHD iteration sequence of 18 fits, and then update the xshifts throughout, or vice versa. The answer is that the analysis is not exactly converging. Now for iteration 4, r=0.777 D and r=0.848 V, and for iteration 5, r=0.756 D and r=0.847 V. The values of the maps are not noticeably changed, and this is good.
On 07/16/2017 - 07/17/2017 I went ahead and tried to run the analysis first by not changing the vratio and dratio on any but the beginning of the MHD iterations, and then by not changing only the xcshifts on any but the beginning of the MHD iterations. The former, where vratio and dratio were not changed on subsequent MHD iterations, gave the best results on the first iteration by quite a bit (r=0.880 D and r=0.886 V) compared with (r=0.855 D and r=0.817 V) on the first iteration before, without doing this. Thus, I will now see if this works to give even higher correlations for subsequent ENLIL iterations.
# iteration        Density(now) Velocity(now) Density(old) Velocity(old)  End D (now) V (now)           End D (old) V (old)
Iteration 0 (kin)  r=0.820      r=0.867       r=0.820      r=0.867        0.634 0.2658 0.139 0.005883   0.634 0.2658 0.139 0.005883
Iteration 1        r=0.880      r=0.862       r=0.855      r=0.817        0.650 0.2237 0.145 0.006520   0.658 0.2288 0.146 0.006553
Iteration 2        r=0.936      r=0.826       r=0.954      r=0.874        0.661 0.2253 0.151 0.006661   0.663 0.2305 0.152 0.006407
Iteration 3        r=0.887      r=0.812       r=0.814      r=0.842        0.674 0.2336 0.152 0.006067   0.669 0.2329 0.154 0.006073
Iteration 4        r=0.813      r=0.829       r=0.777      r=0.848        0.675 0.2367 0.151 0.006216   0.679 0.2401 0.156 0.006119
Iteration 5        r=0.883      r=0.806       r=0.756      r=0.847        0.673 0.2325 0.153 0.006155   0.676 0.2386 0.153 0.006032
Thus, from the above I see that there is little difference between these two options, with perhaps the first (old) way being slightly better on iteration 2, and for this Carrington rotation superior to the kinematic result in the in-situ comparison. Also noted is the fact that the in-situ correlations go down a little for subsequent iterations, and the body fits are not quite as good for the end MHD iteration. This is somewhat strange, and probably means something else is not quite operating correctly in the tomographic analysis.
By the way, on Iteration 4 (now) above, ENLIL goes through 17766 steps to finish, but this varies with the analysis, and another iteration provided a finish at 18148 steps.
On 07/16/2017 I tried Carrington rotation 2114.0 again from Iteration 0 to check that nothing goes amiss from the beginning of the analysis using the old way of providing the iterations. Oops, somehow the last zeroth iteration was not as good as it should have been. The values should have been:
# iteration                    Density  Velocity  End D           End V
Iteration 0 (kinematic run)    r=0.919  r=0.954   0.643 0.2293    0.139 0.004933
Iteration 1 (ENLIL 3-D MHD)    r=0.684  r=0.865   0.666 0.2303    0.148 0.005895
Iteration 2 (ENLIL 3-D MHD)    r=0.948  r=0.812   0.659 0.2250    0.149 0.006023
Iteration 3 (ENLIL 3-D MHD)    r=0.848  r=0.850   0.678 0.2414    0.155 0.006092
Iteration 4 (ENLIL 3-D MHD)    r=0.824  r=0.810   0.676 0.2382    0.155 0.006489
Iteration 5 (ENLIL 3-D MHD)    r=0.819  r=0.846   0.682 0.2431    0.154 0.005966
Iteration 6 (ENLIL 3-D MHD)    r=0.815  r=0.841   0.679 0.2395    0.154 0.006125
Iteration 7 (ENLIL 3-D MHD)    r=0.788  r=0.842   0.675 0.2380    0.154 0.006221
Iteration 8 (ENLIL 3-D MHD)    r=0.850  r=0.843   0.674 0.2346    0.153 0.006042
Iteration 9 (ENLIL 3-D MHD)    r=0.725  r=0.849   0.679 0.2418    0.152 0.006287
Iteration 10 (ENLIL 3-D MHD)   r=0.837  r=0.862   0.673 0.2343    0.150 0.006092
The synopsis is that following about iteration 2 the ENLIL fits get worse, but following about iteration 3 they really don't get much worse or better. I suspect that now there might be other things to try, but this will take a lot of work. To check, it might be good to:
1) flip the sign of one or the other of the non-radial components to see if there is a sign problem, or
2) remove the magnetic part of the 3-D MHD,
3) smooth the analysis more, or
4) provide a factor of 2 higher-resolution tomographic result.
On 07/18/2017 I began to try again with the iterative ENLIL and Carrington rotation 2115.0. Hsiu-Shan has been working on this rotation for a long time, and one of the CMEs near the beginning of the rotation has been well studied. The values for CR 2115.0 are:
# iteration                    Density  Velocity  End D           End V
Iteration 0 (kinematic run)    r=0.928  r=0.956   0.676 0.2310    0.162 0.003933
Iteration 1 (ENLIL 3-D MHD)    r=0.781  r=0.681   0.730 0.2672    0.165 0.004072
Iteration 2 (ENLIL 3-D MHD)    r=0.519  r=0.891   0.??? 0.????    0.??? 0.??????
Iteration 3 (ENLIL 3-D MHD)    r=0.393  r=0.898   0.??? 0.????    0.??? 0.??????
Iteration 4 (ENLIL 3-D MHD)    r=0.357  r=0.905   0.??? 0.????    0.??? 0.??????
Iteration 5 (ENLIL 3-D MHD)    r=0.412  r=0.858   0.??? 0.????    0.??? 0.??????
Iteration 6 (ENLIL 3-D MHD)    r=0.362  r=0.857   0.??? 0.????    0.??? 0.??????
Iteration 7 (ENLIL 3-D MHD)    r=0.???  r=0.??    0.??? 0.????    0.??? 0.??????
Iteration 8 (ENLIL 3-D MHD)    r=0.???  r=0.???   0.??? 0.????    0.??? 0.??????
Iteration 9 (ENLIL 3-D MHD)    r=0.???  r=0.???   0.??? 0.????    0.??? 0.??????
Iteration 10 (ENLIL 3-D MHD)   r=0.???  r=0.???   0.??? 0.????    0.??? 0.??????
On 07/22/2017 I tried to run CR 2056. At first this didn't work. The start time, in year month day and hour, must be 14 days before the time stamp, which is the time of the beginning of the Carrington rotation. Thus, if the start of the Carrington rotation (2056.0) is, as Dusan says, 2007 04 26, then the beginning time of the files produced should be 2007 04 12 03. ENLIL thinks the start is 26 - 14 = 12, but the tomography does not do this. It begins at 2007 04 13, and so to read all 48 tomography files the time input to iterate_ENLIL.sh is 2007 04 13 with 48 files to read. However, this doesn't work.
The tomography program was bombing, and I needed to recompile it to read only 47 files. The first file it thinks it needs is at 2007 04 13. This begins one file later than the first file ENLIL produces, and when the tomography program tries to read one more file than necessary (48), the program produces an infinity in the vratio of the last file and cannot run. One way to fix this is to set the number of files for the tomography to read to 47; then the tomography program won't try to read one more file than ENLIL produces, and it runs. Probably, not using the last file ENLIL produces is OK too. The values for CR 2056.0 are:
# iteration                    Density  Velocity  End D           End V
Iteration 0 (kinematic run)    r=0.909  r=0.916   0.??? 0.????    0.??? 0.??????
Iteration 1 (ENLIL 3-D MHD)    r=0.797  r=0.818   0.??? 0.????    0.??? 0.??????
Iteration 2 (ENLIL 3-D MHD)    r=0.871  r=0.744   0.??? 0.????    0.??? 0.??????
Iteration 3 (ENLIL 3-D MHD)    r=0.877  r=0.854   0.??? 0.????    0.??? 0.??????
Iteration 4 (ENLIL 3-D MHD)    r=0.???  r=0.???   0.??? 0.????    0.??? 0.??????
Iteration 5 (ENLIL 3-D MHD)    r=0.???  r=0.???   0.??? 0.????    0.??? 0.??????
Iteration 6 (ENLIL 3-D MHD)    r=0.???  r=0.???   0.??? 0.????    0.??? 0.??????
Iteration 7 (ENLIL 3-D MHD)    r=0.???  r=0.???   0.??? 0.????    0.??? 0.??????
Iteration 8 (ENLIL 3-D MHD)    r=0.???  r=0.???   0.??? 0.????    0.??? 0.??????
Iteration 9 (ENLIL 3-D MHD)    r=0.???  r=0.???   0.??? 0.????    0.??? 0.??????
Iteration 10 (ENLIL 3-D MHD)   r=0.???  r=0.???   0.??? 0.????    0.??? 0.??????
On 03/08/2018, at the end of about 4 months of work, prior to the AGU, over Christmas, and following through this year, I have managed to check over the iterative ENLIL system and the translate program, and I have again begun to work to provide a better iterative ENLIL and MS-FLUKSS programming. Essentially, I needed to revise the translate program, namely the main program, to write out diagnostic checks, and the subroutine interpolate_enlil, which is now called interpolate_enlil_nm.f. This subroutine is changed significantly from the last version that was used and that did iterate.
On 03/08/2018 I have also managed to provide two diagnostic writes in the main program ipstd_20n_inp_mag3_v17_mhd.f to check that the MHD files were read correctly, and that the files and results produced by the tomography program were actually extrapolated outward from the source surface correctly by the volumetric traceback matrix. These options are now available, following the reads of the MHD data, using two Yes/No ask questions. The first question allows a simple write that displays the input MHD data in the way used by the UCSD tomography IDL. This displays the input data using a new subroutine called write3d_input_3D_MHD_DV.f. The second question allows a write of the data using the traceback, by running through the tomography program once without doing any interpolation. This program modification was actually done first, and appeared to provide strange results. The strange results showed shocks that disappeared and then reappeared at a higher location, unlike those present in the ENLIL results. However, I discovered that this same result was present in the MHD data, and that the translated volumes, both those made directly from the MHD volumes and those using the traceback from the source surface, are essentially identical.
The volumes using the non-iterated MHD inputs nevertheless gave reasonable representations of the in-situ data on the first iteration of ENLIL without iterations. However, over the last few weeks I did find that the conditioning using the MHD base files does not seem to work correctly to provide the file smoothing necessary to make a complete data set throughout the Carrington rotation. This has been a bug in the program from the beginning. I have now provided a fix for this that uses the iterated and filled files that follow, and skips directly to the base writes for the tomography program. This gave a problem in the last days because, although the subroutine write3d_infotd3dM_HR_3.f gave answers, for some reason the inputs in the calling sequence have become corrupted, and there was an extension _18 placed on each output file. I found a work-around for this by making the niterT value = 17 before input to the write3d_infotd3dM_HR_3.f subroutine. This is surely not optimal, but it does allow me to check that the ENLIL runs converge using the scripting I have devised, and sure enough the ENLIL tomography now seems to converge pretty well even though the ENLIL input files are only marginally acceptable. Two things are now needed:
1) find the reason the input for write3d_infotd3dM_HR_3.f is corrupted, and
2) more difficult, and requiring Dusan's help, provide ENLIL inputs at higher time cadence and resolution so that shocks do not disappear and then reappear in the input volumes that are given to and used in the traceback matrix.
Checks to determine that ENLIL really does converge with the new system and CR 2114.0 are underway, and so far give:
# iteration                    Density  Velocity  Br     Bt     it-18 End V      it-18 End D
Iteration 0 (kinematic run)    r=0.900  r=0.951   0.739  0.846  0.135 0.004516   0.638 0.2380
Iteration 1 (ENLIL 3-D MHD)    r=0.650  r=0.794   0.692  0.683  0.160 0.006835   0.660 0.2380
Iteration 2 (ENLIL 3-D MHD)    r=0.819  r=0.919   0.660  0.659  0.143 0.006460   0.636 0.2211
Iteration 3 (ENLIL 3-D MHD)    r=0.796  r=0.888   0.669  0.674  0.137 0.006298   0.632 0.2195
Iteration 4 (ENLIL 3-D MHD)    r=0.782  r=0.868   0.667  0.665  0.140 0.006763   0.640 0.2250
Iteration 5 (ENLIL 3-D MHD)    r=0.781  r=0.846   0.674  0.670  0.138 0.006586   0.638 0.2240
Iteration 6 (ENLIL 3-D MHD)    r=0.810  r=0.852   0.670  0.668  0.136 0.006600   0.640 0.2254
Iteration 7 (ENLIL 3-D MHD)    r=0.822  r=0.853   0.670  0.670  0.138 0.006594   0.638 0.2250
Iteration 8 (ENLIL 3-D MHD)    r=0.790  r=0.845   0.670  0.671  0.138 0.006341   0.644 0.2283
Iteration 9 (ENLIL 3-D MHD)    r=0.811  r=0.835   0.669  0.667  0.139 0.006657   0.642 0.2272
Thus, as you can see above, the ENLIL tomography settles down and gives acceptable values for the density and velocity correlations. The correlations for Br and Bt are offset such that the IPS analysis leads the values of ACE by about two days. This is not the case for iteration 0, the kinematic analysis for ENLIL. This lead also seems to persist for the measurements of density, from the ENLIL values both mapped in volumetric data and input in later iterations. There are still known bugs in the tomography program:
1) The above conditioning of the tomography is only conditionally acceptable and should be revised (somehow) with the appropriate similar version that is used in the kinematic modeling and that works.
2) The fact that an extension is placed on the files is an error that is now only rectified by a write changing the 18 to 1 before the write3d_infotd3dM_HR_3.f subroutine is entered. This corrupted programming is unacceptable.
I now suspect the calling sequence in this subroutine is too long, and some part of the calling sequence is overwritten.
3) The writes of the files that show the input data traced by the provided volumetric traceback alone now have the extension _1. Gurr!
On 03/15-17/2018 I was able to revise the translate.f Fortran program so that it also runs the MS-FLUKSS program. I found that the call to interpolate_enlil was not working due to a corrupted value in the calling sequence. Shortening the calling sequence fixed this problem, but tracking down the error took considerable work. I also found that the files read in at the end were not translated correctly; this was caused by the set-up of the arrays ALONG(IJ) and XMAP(IJ). This was fixed by adding a new line in the program: if(XMAP0.gt.XCTEST0) XMAP0 = XMAP0 - 1.0*DXF. Perhaps if other, higher-resolution data are used, a few more lines here will need to be added. On 03/15-17/2018, following these fixes, the scripting needed to run the new tomography and to access the IDL from scripts needed to be fixed, but now the scripts for MS-FLUKSS run the translate, tomography, and IDL boundary programs and upload these boundaries to the UCSD cass185 ftp site.
On 04/01-8/2018, with Tae Kim's help, I was able to iterate MS-FLUKSS to obtain an analysis to be compared with the CR 2114.0 analysis from ENLIL (above). The correlations and least-squares fits from the analysis are shown below. Checks to determine that MS-FLUKSS really does converge with the new system and CR 2114.0 give:
# iteration                         Density  Velocity  Br     Bt     it-18 End V      it-18 End D     -0 D val  -0 V val
Iteration 0 (kinematic run)         r=0.900  r=0.951   0.739  0.846  0.135 0.004516   0.638 0.2380    0         0
Iteration 1 (MS-FLUKSS 3-D MHD)     r=0.130  r=0.357   0.750  0.758  0.173 0.007261   0.711 0.2562    195       195
Iteration 2 (MS-FLUKSS 3-D MHD)     r=0.331  r=0.477   0.818  0.824  0.183 0.007465   0.747 0.2877    317       317
Iteration 3 (MS-FLUKSS 3-D MHD)     r=0.507  r=0.480   0.860  0.829  0.201 0.008015   0.760 0.2874    179       161
Iteration 4 (MS-FLUKSS 3-D MHD)     r=0.445  r=0.329   0.832  0.807  0.196 0.006444   0.788 0.3017    157       157
Iteration 5 (MS-FLUKSS 3-D MHD)     r=0.166  r=0.593   0.798  0.736  0.198 0.008179   0.713 0.2489    302       302
Iteration 6 (MS-FLUKSS 3-D MHD)     r=0.315  r=0.690   0.804  0.848  0.171 0.006977   0.773 0.3029    182       182
Iteration 7 (MS-FLUKSS 3-D MHD)     r=0.580  r=0.446   0.750  0.764  0.202 0.007381   0.762 0.2936    220       220
Iteration 8 (MS-FLUKSS 3-D MHD)     r=0.482  r=0.530   0.759  0.822  0.195 0.008115   0.728 0.2647    103       75
Iteration 9 (MS-FLUKSS 3-D MHD)     r=0.139  r=0.542   0.795  0.824  0.191 0.008007   0.762 0.2959    131       121
The MS-FLUKSS program does not give as good density and velocity correlations in the IPS tomography as ENLIL does. However, the magnetic fields have almost no offset, unlike ENLIL, and thus the magnetic field correlations are better and in some cases exceed those of the kinematic model. The main difference is that, unlike ENLIL, for MS-FLUKSS there are values of the density forced below zero, and I have a suspicion that this is what causes the large spikes in the velocity and density time series and thus the bad correlations of these values with the in-situ measurements in the analysis. I produced a rather Draconian fix to keep densities from going negative by limiting the positive values of DFACT and smoothing them. The maximum values of DFACT are now set to 3.0. No limits are placed on the negative DFACT values. This gets rid of the negative densities, and the initial tests of a few iterations showed that these gave better correlations.
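The Draconian fix described just above is essentially a one-sided cap on DFACT before it is applied (followed by the smoothing mentioned in the text). A minimal sketch of the cap alone, with the 3.0 limit from the text and illustrative names:

C     Minimal sketch: cap the positive density ratio factors at 3.0 as
C     described above (no limit on the negative side); the smoothing
C     step mentioned in the text is not shown.  Names illustrative.
      SUBROUTINE DCLIP(DFACT, N)
      INTEGER N, I
      REAL DFACT(N), DMAX
      PARAMETER (DMAX = 3.0)
      DO I = 1, N
         IF (DFACT(I) .GT. DMAX) DFACT(I) = DMAX
      END DO
      RETURN
      END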
Thus, with Tae, I am now attempting a new set of iterations. The values of these, with DFACT set to a maximum of 3.0, give:

# iteration                       Density  Velocity  Br     Bt     it-18  End V     it-18  End D   -0 D val  -0 V val
Iteration 0 (kinematic run)       r=0.900  r=0.951   0.739  0.846  0.135  0.004516  0.638  0.2380     0    0
Iteration 1 (MS-FLUKSS 3-D MHD)   r=0.357  r=0.359   0.750  0.758  0.173  0.007254  0.704  0.2544     0    0
Iteration 2 (MS-FLUKSS 3-D MHD)   r=0.359  r=0.478   0.819  0.825  0.181  0.007567  0.747  0.2905     0    0
Iteration 3 (MS-FLUKSS 3-D MHD)   r=0.616  r=0.443   0.860  0.850  0.202  0.007563  0.751  0.2710     0    0
Iteration 4 (MS-FLUKSS 3-D MHD)   r=0.445  r=0.329   0.832  0.807  0.196  0.006444  0.788  0.3017   157  157
Iteration 5 (MS-FLUKSS 3-D MHD)   r=0.166  r=0.593   0.798  0.736  0.198  0.008179  0.713  0.2489   302  302
Iteration 6 (MS-FLUKSS 3-D MHD)   r=0.315  r=0.690   0.804  0.848  0.171  0.006977  0.773  0.3029   182  182
Iteration 7 (MS-FLUKSS 3-D MHD)   r=0.580  r=0.446   0.750  0.764  0.202  0.007381  0.762  0.2936   220  220
Iteration 8 (MS-FLUKSS 3-D MHD)   r=0.482  r=0.530   0.759  0.822  0.195  0.008115  0.728  0.2647   103   75
Iteration 9 (MS-FLUKSS 3-D MHD)   r=0.139  r=0.542   0.795  0.824  0.191  0.008007  0.762  0.2959   131  121

On 04/30/2018 Hsiu-Shan and I learned that the IDL program that provides magnetic field data no longer completely fills the last few forecast data-set times following the current time. One fix is to completely fill the magnetic field files at the base of the tomography program. On 05/02/2018 I produced a new main tomography program, ipstd_20n_inp_mag3_v18.f, and the associated Makefile change in order to fix the forecast system so that it provides filled magnetic field data following the current time. The ipstd_20n_inp_mag3_v18.f program compiles without any changes being made. On 05/03-04/2018 I worked on and provided a fix to the tomography program that fills the fields forecast at the source surface with the last available field (a minimal sketch of this persistence fill appears a few entries below). This was tested and subsequently used in version 18 to provide source-surface fields for forecasts on the UCSD high-resolution Web pages. On 05/07/2018 a small error in the listing of the files filled with gridsphere was checked and fixed. It was learned that actually NO files at the end of the forecast program were filled in the test case used; all files had been filled by the subroutines FillWholeT.f and FillMaptN.f, and the later call to the subroutine GridSphere2D.f was never used. The small changes to the main program to indicate this were tested and found to work. The fix that fills the fields forecast at the source surface with the last available field was extended to the Radial, Tangential, and Normal closed fields when this option is used in a forecast; this latter change was not tested, however. On about 05/08/2018 I transferred all of these changes over to the ipstd_20n_inp_mag3_v18_mhd.f program. On 05/18/2018 Hsiu-Shan and I discovered that the ENLIL tomography was not giving boundary outputs very similar to those from MS-FLUKSS. This had been known for several weeks, but on this day we discovered that the e3 files produced even in the ENLIL kinematic mode were not written at even 6-hour intervals beginning at 3 UT. I found that the variable NETF, set in the ENLIL runs to limit the number of output files to 47, caused this. This is certainly wrong for production of the e3_ files, so I have now removed this option.
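Referring back to the 05/03-04/2018 fix: a minimal sketch of the persistence fill, with assumed (illustrative) array names and dimensions rather than those of ipstd_20n_inp_mag3_v18.f, in which every forecast time step beyond the last one containing data receives a copy of that last available source-surface map.

      subroutine fill_forecast(bmap, nlng, nlat, nt, nlast)
c     Sketch only: copy the last source-surface map that contains
c     data (time index nlast) into every later time step so the
c     forecast interval beyond the current time is not left empty.
c     The array name and shape are illustrative placeholders.
      implicit none
      integer nlng, nlat, nt, nlast, i, j, n
      real bmap(nlng, nlat, nt)
      do n = nlast + 1, nt
         do j = 1, nlat
            do i = 1, nlng
               bmap(i, j, n) = bmap(i, j, nlast)
            end do
         end do
      end do
      return
      end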
On 05/18/2018 I found that the analysis for ENLIL, when the tomography had a boundary at 0.103..., still did not agree with that from MS-FLUKSS. Thus, thanks to Hsiu-Shan, I decided to try running the ENLIL analysis at 0.1 to be compatible with MS-FLUKSS. On 05/18/2018 I was able to show that the boundaries from ENLIL taken out at 0.103... and from MS-FLUKSS taken out at 0.1 were almost identical. The e3 files for both MS-FLUKSS and ENLIL are also now identical on iteration 0, and the e3 files from both are now produced at even 6-hour intervals beginning at 3 UT.
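For reference, an illustrative listing (not production code) of the even 6-hour daily cadence, beginning at 3 UT, at which the e3_ files are now written:

      program e3times
c     Illustrative only: the even 6-hour output times, beginning at
c     3 UT, at which the e3_ files are now produced each day.
      implicit none
      integer ihr
      do ihr = 3, 21, 6
         write(*,'(a,i2.2,a)') ' e3_ output at ', ihr, ' UT'
      end do
      end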