Example command lines:

smeiipstd ucsd=SANIPS,$dat/ucsd/ helios1=a77b4_073_"(4)".hos,$dat/helios1/ helios2=b77b4_073_"(4)".hos,$dat/helios2/
./smeiipstd ucsd=SANIPS,./ helios1=a77b4_073_"(4)".hos,./ helios2=b77b4_073_"(4)".hos,./
./smeiipstd nagoya=nagoya,/home/bjackson/dat/nagoya/yearly smei=points_2003_"(20)",/home/bjackson/dat/smei/may_2003   (works 04/18/07)
./smeiipstd0n_inpV20_intel smei=$DAT/smei/low_res/*.* nagoya=nagoya,,yearly ace=$DAT/insitu/acesw_[4].hravg
./smeiipstd0nhr_inpv20_intel smei=$DAT/smei/hi_res/*.* nagoya=nagoya,,yearly ace=$DAT/insitu/acesw_[4].hravg
./smeiipstd0n_inpv20_intel smei=$DAT/smei/low_res/*.* nagoya=nagoya,,yearly ace=$DAT/insitu/acesw_[4].hravg >&1 | tee test1.txt
~/soft/for/smeiipstd0n_in-situ/v20/v20vh
nso_gorr=/home/bjackson/dat/gong/hcss/nso_gong[4]_[3].fts
nso_gorr=$DAT/map/gong/hcss/nso_gong[4]_[3].fts
nso_sorr=$DAT/map/nso_noaa/hcss/nso_noaa[4]_[3].fts   (gets past the magnetic file read error but dies)

Revisions:

During the week of 03/08/01 I began modifying ipsdtestR using mk_ipsdtestR.

Stuff before:

On about 11/15/00 I discovered an error in the way scratch arrays were zeroed. I modified the IPSDTDV2020NN.f program to use the above programs with scratch arrays placed in the calling sequence. When I did this I noticed that the scratch arrays originally used were not dimensioned properly for the 2020 program. The scratch arrays zeroed at the beginning of the program were not fully zeroed, and this was an error. When this was done properly, the 2020 program no longer converged with few holes - there were a lot of holes in both velocity and density, and the results that I had come to like and agree with were no longer correct. I presume that the increased number of line-of-sight crossings from the non-zeroed arrays was partially responsible for the previous OK results (which always seemed to be consistently the same for any given Carrington map).
I consequently got the idea to filter the weights and the number of line-of-sight crossings in both time and space, so that these are more consistent for only a few lines of sight, in a way similar to the answers. Thus the weights and the numbers of line crossings are now smoothed spatially and temporally with the same filters as the answers. This seems to work wonderfully, and allows the line-of-sight crossing threshold to be set lower than before - to as low as 1.5. At 1.5 most of the Carrington map is filled and the program proceeds to convergence (on Carrington map 1965) pretty well. As of 11/16/00 the comparisons with in-situ data work almost as well as before, but time will tell. In an additional change on 11/15/00, I modified mkvmaptdN.for and MKDMAPTDN.for to increase the effective number of line-of-sight crossings by sqrt(1/cos(latitude)). This allows acceptance of a lower number of line-of-sight crossings near the poles, since the spatial filtering is greater there. This also seems to work. On 11/15/00 I also modified fillmaptN.for and copyvtovdN.for to accept scratch arrays through the calling sequence. IPSDTDV2020NN.f also needs to be modified to call these subroutines with their new calling sequences. The convergence with the above modifications (11/16/00) seemed pretty ragged until sources were thrown out, after which the convergence proceeded in a very stable fashion. The peak in density at 7/15 was absent in the density data, and this may be because of the thrown sources. On 11/16/00 I then also smoothed FIXM spatially and temporally in both mkvmaptdN.for and MKDMAPTDN.for. This also converged in a ragged fashion even after the thrown sources, and in the end did not write the 3-D arrays I asked for - maybe memory was clobbered. The program did converge, however. The in-situ time series was not at all like the model!!! On 11/16/00 I then changed back to the earlier convergence scheme where FIXM is not smoothed. The 3-D arrays still are not being written out.
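The latitude boost described above can be sketched as follows. This is a hedged Python illustration of the idea, not the Fortran source; the function names and the exact acceptance test are my own, and only the sqrt(1/cos(latitude)) factor and the 1.5 threshold come from the notes.

```python
import math

def effective_crossings(n_cross, latitude_deg):
    """Scale the raw line-of-sight crossing count by sqrt(1/cos(latitude)),
    so a lower raw count is accepted near the poles, where the spatial
    filter window is wider."""
    lat = math.radians(latitude_deg)
    return n_cross * math.sqrt(1.0 / math.cos(lat))

def accept_cell(n_cross, latitude_deg, threshold=1.5):
    """Accept a map cell if its boosted crossing count reaches the
    threshold (1.5 is the lowered limit mentioned in the notes)."""
    return effective_crossings(n_cross, latitude_deg) >= threshold
```

A cell with 1.2 raw crossings fails at the equator but passes at 60 degrees latitude, which is exactly the intended effect of the boost.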
On 11/16/00 I noticed that the mkvmodeltd.for routine was not the one in the vax [.ipsd2020.normn] subdirectory, but an older version. I re-wrote the newer version (which does not need a scratch file) and replaced mkvmodeltd.for with it. I also checked that the MKGMODELTDN.for subroutine was unchanged from the vax, and I revised it so that it passes two scratch arrays through its calling sequence. The newer version iterates identically to the old version. There seems to be no change in the write3b status throughout the iterations, indicating that nothing gets clobbered during the iterations. On 11/21/00 I fixed the fillwholet.for subroutine so that a scratch array is now passed through its calling sequence, fixing the error noticed above on 11/16/00. Things seem pretty much the same when I do this, and now the t3d files are output. On 11/21/00 I stopped using distributed weights, since the program NOT using distributed weights seems to converge as well as the version that does. The EA answers are somewhat different, but not much. On 12/5/00 I found an error in the write3dinfotd1N.for subroutine in forecast mode. The subroutine gives two 3d files that are incorrectly labeled (and interpolated) because AnewI is wrong. I believe AnewI is wrong because the input N values are wrong in forecast mode. The problem was in the use of XCintF: there was no N+1 copy of the XCintGG array into XCintF in the main program. In forecast mode this caused the bomb. However, whatever reason there once was for a forecast mode in the write3dinfotd1N.for routine, it does not exist any more. I have therefore eliminated the forecast mode for write3dinfotd1N.for in both the main program and the subroutine, so that now all that is needed is a single call to write3dinfotd1N.for.
On 12/7/00 I found an error in the MKTIMESVD.for routine: the NmidHR value was subtracted from the time to start the beginning day. In the main program this value was labeled number of hours "before" midnight and was -10 for Nagoya. The main program now reads number of hours "from" midnight, and is -10 as before for Nagoya. UCSD is +8 or +9, depending on daylight saving time. The MKTIMESVD.for subroutine now adds this value to begin the day at midnight. This has the effect of changing all the extensions of the t3d files, since the temporal intervals now begin at a different time - midnight at Nagoya. If the t3d files are interpolated by 1 in write3dinfo1N.for, this divides the day into times before midday at Nagoya and after midday at Nagoya. If the files are interpolated by 2 in write3dinfo1N.for, then the t3d files are divided (approximately) into midnight to 8am, 8am to 4pm, and 4pm to midnight. The extension values have been checked by PL HE Pandora on the vax. On 12/7/00 I found that the forecast 51874 run, which terminates data at this time, or 12 UT on November 25 (355 sources found), gives the last matrix at 1970.064 (centered at 1970.0825, or 2 UT November 26). The forecast run at 51874.5 (0 UT November 26) (371 sources found) gives the last matrix at 1970.064 as well. Since this does not give even a one-day 3d matrix advance forecast, I have now changed the values of NTV and NTG by one more so that the 3d matrix is written to a file for at least one day beyond the current time. On 12/7/00 I found that in the forecast runs there were as many sources used as there were source values within the NLG and NLV increments set by XCintG and XCintV. I fixed this in the main program so that now all the data used in forecast mode comes from times earlier than the Carrington rotation of the Earth at the input forecast time. The current mk_ipsd2020NN uses the IPSDTDV2020NN.f main program.
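The new MKTIMESVD convention can be sketched in Python (the real routine is Fortran, and the function name here is my own): the hours-from-midnight offset (-10 for Nagoya, +8 or +9 for UCSD depending on daylight saving) is added so that each day's temporal intervals begin at local midnight, and interpolation then subdivides the day evenly.

```python
def interval_boundaries_ut(day_ut, hours_from_midnight, n_per_day):
    """Divide one day into n_per_day equal intervals beginning at local
    midnight, expressed in UT hours from the start of day_ut.  The offset
    is ADDED (the 12/7/00 fix), not subtracted as before."""
    start = day_ut * 24.0 + hours_from_midnight
    step = 24.0 / n_per_day
    return [start + k * step for k in range(n_per_day + 1)]
```

For Nagoya (-10) with three intervals per day (the interpolate-by-2 case), each day splits into three equal 8-hour local intervals, matching the midnight/8am/4pm/midnight division described above.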
On 01/30/01 I copied all the programs of the mk_ipsd2020NN compilation over to the for/IPSd2020NN subdirectory so that this program and its subroutines are now complete and separate from the other fortran programs. On 01/30/01 I also began to alter the FIXMODELTDN.for and mkvmodeltd.for subroutines so that they better reproduce in-situ velocities. I have done this by copying all the files to the for/IPSd2020 subdirectory so that this program and its subroutines are complete and separate from the other fortran programs. I renamed the IPSDTDV2020NN.f program to IPSD2020.for. When I ran the program for 1965 in this subdirectory, the result for 16., 16., 0.25, 0.25 and 0.40 for the EA_FILE idl run_compareaceLp run was 0.771 and 0.066, slightly different from before (0.688 and 0.058). I don't know why there is this slight difference. On 01/30/01 I began a lot of changes in mkvmodeltd.for in order to get better velocities into the matrix. I think the original (or perhaps the 01/31/01B model) is the correct one for weighting, but I tried a lot of things to see if there could be an improvement in the in-situ comparison. There was not a whole lot of difference among the variants I tried, below:

C      VPERP = VPERP + VWTij(J,I)*VSN*VELO       ! Original
C      VWT = VWT + VWTij(J,I)
C      VWTi = VWTi + VWTij(J,I)

C      VWTij(J,I) = VWTij(J,I)*VSN               ! Added B. Jackson 01/30/01
C      VWT = VWT + VWTij(J,I)
C      VPERP = VPERP + VWTij(J,I)*VELO
C      VWTi = VWTi + VWTij(J,I)

C      VWTij(J,I) = VWTij(J,I)*VSN               ! Old run long ago.

C      VPERP = VPERP + VWTij(J,I)*VSN*VELO       ! 01/31/01A
C      VWT = VWT + VWTij(J,I)*VSN
C      VWTi = VWTi + VWTij(J,I)*VSN
C      VWTij(J,I) = VWTij(J,I)*VSN

C      VPERP = VPERP + VWTij(J,I)*VSN*VELO       ! 01/31/01B Seemed ~best so far
C      VWT = VWT + VWTij(J,I)
C      VWTi = VWTi + VWTij(J,I)*VSN*VELO
C      VWTij(J,I) = VWTij(J,I)*VSN*VELO

C      VPERP = VPERP + VWTij(J,I)*VSN*VELO       ! 02/01/01A
C      VWT = VWT + VWTij(J,I)
C      VWTi = VWTi + VWTij(J,I)/VSN
C      VWTij(J,I) = VWTij(J,I)/VSN

C      VPERP = VPERP + VWTij(J,I)*VSN*VELO       ! 02/01/01B (Original)
C      VWT = VWT + VWTij(J,I)
C      VWTi = VWTi + VWTij(J,I)
C      VWTij(J,I) = VWTij(J,I)

C      VPERP = VPERP + SQRT(VWTij(J,I))*VSN*VELO ! 02/01/01C
C      VWT = VWT + SQRT(VWTij(J,I))
C      VWTi = VWTi + SQRT(VWTij(J,I))*VSN*VELO
C      VWTij(J,I) = SQRT(VWTij(J,I))*VSN*VELO

       VPERP = VPERP + VWTij(J,I)*VSN*VELO       ! 01/31/01B Seemed ~best so far
       VWT = VWT + VWTij(J,I)
       VWTi = VWTi + VWTij(J,I)*VSN*VELO
       VWTij(J,I) = VWTij(J,I)*VSN*VELO

C      VPERP = VPERP + (VWTij(J,I)**2)*VSN*VELO  ! 02/02/01A
C      VWT = VWT + (VWTij(J,I)**2)
C      VWTi = VWTi + (VWTij(J,I)**2)*VSN*VELO
C      VWTij(J,I) = (VWTij(J,I)**2)*VSN*VELO

       VW = VWTij(J,I)*VSN*VELO                  ! 01/31/01B rewritten
       VPERP = VPERP + VW
       VWT = VWT + VWTij(J,I)
       VWTi = VWTi + VW
       VWTij(J,I) = VW

Thus, I will settle on the version above. All other versions of the program should incorporate this weighting, which essentially places all the line-of-sight variations into the weight. The nominal 16., 16., .65, .65, .25, .25, .4 run of 1965 gives 0.647546 and 0.229140 for the density and velocity correlations for the restricted data set and ACE in-situ measurements around the time of the July 14 CME peak. Other combinations of parameters give higher correlations, but none give the same density values in-situ/model with 16. and .65 as these do. The run of velocity deconvolution alone (2/12/01) did not allow the parameters to be set. This is now fixed in the main program (2/12/01). The version of the program that deconvolves velocity alone (both the constant-density one and the one that uses mv^2 = constant) bombs in gridsphere with bad VMAP values before any iterations are gotten to. I have now fixed this, and also fixed the problem when G-level alone is selected. The problem was in the setup of each initial velocity or density array. On 2/14/01 the velocity mv^2 works. On 2/14/01 the density mv^2 does NOT work to give velocity. Thus, I had better fix the Mk_D2V subroutine.
On 2/15/01 I re-did a lot of the program to accommodate the modes that use a constant velocity and density and the mv^2 assumptions, plus a write-out of the DMAPHOLE and VMAPHOLE data. These work now and have been checked using the routine to converge on both density and velocity, on velocity using constant density, and on velocity using mv^2. The latest runs give the nominal using 16., 16., .65, .65, .25, .25, .4 for 1965 of 0.666897 and 0.417346. I think the "better" correlations may happen because fillwholeT is no longer applied twice, but I am not sure of this. The correlations look considerably different now, too. Thus, to check, I redefined things to use fillwholeT as before, and when I did this the nominal for 1965 looked the same as it has in the past in the iterative phase, at least up to iteration 14 or so, where the two began to diverge ever so slightly in the percentages only. The correlations were 0.647546 and 0.229140, identical to before. I thus returned the fillwholeT routines so that the times are no longer double-smoothed, and I began a search for more "nominal" parameters. I also got a version of MK_DTV.FOR from the CASS01 fortran files, renamed it MK_DTVN.for, and compiled this version in the IPSd2020 subdirectory. The new version now works for mv^2 = constant for density iterations and gives approximately correct velocities. The new version also works for mv^2 = constant for velocity iterations and gives approximately correct densities. Oh, yes, another innovation is that the mv^2 constant is now determined on each iteration, with no convergence problems as far as I can tell. I now also determine the equatorial density and velocity (subroutines Deneq3.for and Veleq3.for) so that the mv^2 constant is correctly applied to the other nominal value. In other words, the average speed at the solar equator is used to determine the mv^2 constant to produce density when speed is modeled.
Likewise, the average density at the solar equator is used to determine the mv^2 constant when density is modeled. The nominal 16., 16., .65, .65, .25, .25, .4 for 1965 gives 0.666897 and 0.417346.

Velocity convergence: Constant density and the nominal for 1965 above gives -0.123838 and 0.0811669. (The density above looks in the correct range, but it just is not constant. This is because the Mkshiftd routine allows non-constant interactions to build up with height as long as velocities are non-constant.) Also, mv^2 density and the nominal for 1965 gives -0.0242271 and 0.115706. End eq. vel = 334.436707

Density convergence: Constant velocity and the nominal for 1965 above gives 0.347806 and NaN. Also, mv^2 velocity and the nominal for 1965 above gives 0.347806 and 0.318303. End eq. den = ? (Something goes wrong when this last is done, in that the t3d file doesn't seem to write out OK. Also, it isn't clear why the correlation with velocity = constant and velocity with density given by mv^2 does not give a different density correlation. It seems to me that it should.)

On 2/16/01 I think I found the error, at least in the last problem. The DMAP is not being updated each time the mv^2 is done. DMAP is consistently being held constant. Also VMAP is the same.

Latest revisions for the ipsdtestR program:

On 03/04/01 I ran the latest test program with spatial resolutions three times normal. The test rotation was 1884. The run lasted about 1 day. Other things were running. On 03/08/01 the run ended where times and spatial resolutions were set to 2 times normal. The run lasted a day and a half for rotation 1884. On 03/07/01 I also ran a few tests with the program in 10 degree mode, and I updated the parameter list to make the changed resolution automatic with only a few changes in the parameter input. I also fixed the program so that it can (I hope) run with two different temporal resolutions - one for density and the other for velocity.
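The mv^2 = constant step described above can be sketched in Python (a hedged illustration of the Deneq3/Veleq3 idea, not the Fortran source; the function name and the list-of-lists map layout are my own). The constant is fixed by the equatorial averages, so that density follows from a modeled velocity map.

```python
def density_from_mv2(vel_map, v_eq_mean, den_eq_mean):
    """Derive density under n*v^2 = const, where the constant is fixed
    by the equatorial mean density and speed:
        const = den_eq_mean * v_eq_mean**2
    vel_map is a list of rows of speeds; zero speeds are assumed absent."""
    const = den_eq_mean * v_eq_mean ** 2
    return [[const / (v * v) for v in row] for row in vel_map]
```

When velocity is modeled, the equatorial mean density sets the constant (as here); when density is modeled, the roles swap, per the notes.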
In addition, these new spatial resolutions now allow different line-of-sight resolutions commensurate with the new spatial resolutions for either velocity or G-level. I will try this version of ipsdtestR today, 03/08/01. Later - the program runs, but the density fixmodel routine returns a "nan", as do all the other outputs of that subroutine. Somehow the "nan" problem went away in later compiled versions of the program, and so now on 03/09/01 I have a "working - so far" program using these new resolutions.

The 10 deg. version in both V and g-level currently takes:

g-level sources: 3851 (CR 1884)   V sources: 703
  memory, 1x resolution in S & T, V:  57,388
  memory, 1x resolution in S & T, g:
  memory, 1x resolution in T, V:     360,488
  2x resolution in S, V:
  2x resolution in S & T, g:

g-level sources: 9265 (CR 1858)   V sources: 652
  memory, 1x resolution in S & T, V:  69,756
  memory, 1x resolution in S & T, g:
  memory, 1x resolution in T, V:     388,396
  2x resolution in S, V:
  2x resolution in S & T, g:

These tests imply that 3 degrees resolution with 6 Megs of sources (200 x 30,000) held in memory will take:
  1.2 Gigs of ram for the program
  55.6 Gigs of ram for the sources
  ~150 hr for 0.01 Megs of sources with an 850 MHz machine

At ONE degree resolution with 18 Megs of sources (600 x 30,000) held in memory:
  11.0 Gigs of ram for the program
  678.0 Gigs of ram for the sources
  ~4000 hr for 0.01 Megs of sources with an 850 MHz machine

On 3/13/01 I began timing experiments. More experiments will continue when the new 1.2 GHz machine with 1.5 Gigs of ram begins working. On 3/14/01 I wrote a subroutine writeM_Scomp.for that prints out files used to check sources if the user asks. For Cambridge data, where no source names are available, this program uses the sky-location declination and the sky distance from the sun to give a unique source identifier for each 5-degree sky location.
On 3/14/01 I also wrote a subroutine iwriteproxymapN.for that is the complement of ireadproxymapN.for. This subroutine writes out maps of the final source-surface data to be used as program input on the first run in order to help iteration convergence. I also modified ireadproxymapN.for so that fillmap.for is no longer needed, and I removed a good many other unneeded lines of code. On 3/15/01 I modified the main program so that it no longer needs to (but can, if wanted) use two shifts on each iteration. The modification uses a shift only after the velocity is determined (a density-based shift) to set up for the density iteration. To do this, the velocity times and settings need to be the same as the density ones, and they are forced to be if the user does not remember to set them this way. On 3/15/01 I checked the ipsdtestR.for program against the old version of the ipsd2020 program in the subdirectory IPSd2020 and found that they gave identical answers. I then renamed the IPSDtestR.for program to IPSTD.for and removed the IPSDtest.for program from the IPSdtest directory. On 3/15/01 I found that the two programs did not give identical answers after all; the error was that the variables aNdayV and aNdayG in the new program were not typed real in the two subroutines MKGMAPTDN.for and mkvmaptdN.for. After these two problems were fixed (in the IPSHTD and IPSd1010 directories as well), the two versions gave identical outputs up until the t3dwrite for CR 1965. The answers are still slightly different: the correlation for the limited time series is now 0.850 rather than 0.840. I do not understand this difference, but it is so slight that I plan to ignore it. On 3/19/01 I moved this program over to CASS183. This version of ipsdtestR.for has better inputs and outputs for files.
I modified this program and mk_ipsdtestR so that it is now called simply ipsdtest on CASS183. There are now three versions of the ipsdtest program in this subdirectory: IPSDtest.for, IPSD1010.for and ipsd2020.for, plus their associated mk_ipsd files; these should be used to change and test new routines and ideas.

**************************************************************************************************

The version of ipshtd on 3/30/01 now seems to work on CASS183. I am testing the two parameters in the current solar wind model that can be controlled specifically by the Helios data to change the model density. These parameters are the base density and the density falloff with height. The current ipshtd version uses the following mk_ipshtd file:

f77 -w -fdollar-ok -ffixed-line-length-none -I$for/h -I$for/h/linux ipshtd.for READVIPSN.for read_hosf.for FIXMODELTDN.for MK_D2VN.for timesmooth.for fillmaptN.for fillmapL.for fillwholet.for MKTIMESVD.for MKDMAPTDN.for MKGMODELTDN.for mkpostd.for mkshiftd.for mkvmaptdN.for mkvmodeltd.for mkvobstd.for get4dval.for get3dtval.for copyvtovdN.for MkDHModeltd.for Mk_Base.for EXTRACTD.for writeM_Scomp.for IREADPROXYMAPN.for iwriteproxymapN.for write3dinfotd1N.for deneq3.for veleq3.for -L$for -lgen -o$myfor/IPSHTD/ipshtd

The original working program for ipsdtest was modified beginning on 3/19/01 to use Helios photometer data. Several subroutines have been adapted or modified to use Helios data, and the current program should incorporate all that the ipsdtest program could do with the IPS data, as well as run Thomson-scattering data. These subroutines include ipsdtest.for, renamed to ipshtd.for, which now incorporates the new subroutines read_hosf.for, MkDHModeltd.for and Mk_Base.for. These new subroutines allow a switch to use Thomson-scattering data from Helios in the tomography program.
These switches are input by the user, and include new setup calls to extract Helios 1 and Helios 2 photometer time series as well as the ips UCSD, Cambridge, Nagoya or Ooty data. Besides the ability to read the Helios data, the current main-program changes from ipsdtest.for include a section that sorts the Helios 1 and Helios 2 data in chronological order, and a section that deals with the currently non-working initialization of the density maps if these are not read in from the /dat/proxy/ sub-directory. The iterative part of the program is pretty much as before, with a small section added to incorporate the base density in the Thomson-scattering data, and a new FALLOFF parameter that decreases the data density derived from the Thomson scattering above and beyond the mkshiftd model. The use of a base density meant that two new map arrays needed to be created in the main program: DBASE, and DMAPMB, which is the density map minus the base, for use in the Thomson-scattering tomography. The philosophy behind this program is far simpler than the Vax version, since once DMAPMB is produced by the tomography it is immediately added to DBASE to provide DMAP, which is manipulated as in the previous versions of the ips tomography programs, including ipsdtest.for. That a FALLOFF parameter is used implies that the velocity increases in the solar wind in the region of the Helios observations and between Helios and Earth. This is NOT currently taken into account in the UCSD IPS velocity data modeling. Because the UCSD IPS data is currently used with 12 days combined to allow the UCSD data to be "time-dependent", the calls to copyvtovdN.for in the program now incorporate aNdayV multiplying CONSTV, and aNdayG multiplying CONSTG. This should be incorporated in other programs as well, especially ipsd1010 - whenever there are times combined (aNday constants not equal to 1).
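The DBASE/DMAPMB bookkeeping above is simple enough to state in a few lines. This is a Python sketch of the map arithmetic only (the function name and list-of-lists layout are my own; the real arrays live in the Fortran main program):

```python
def combine_base_density(dmapmb, dbase):
    """After the Thomson-scattering tomography step, the map-minus-base
    array DMAPMB is immediately added back to the base map DBASE to give
    DMAP, which is then handled exactly as in the IPS-only programs."""
    return [[mb + b for mb, b in zip(row_mb, row_b)]
            for row_mb, row_b in zip(dmapmb, dbase)]
```

The point of the split is that the tomography only ever adjusts the excess over the base, so the base density acts as a floor that the iteration cannot erode.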
At the end of the main program, the call to copyvtovdN.for now uses mode=1 (unlike the mode=2 call in the other versions of the program). This mode currently limits the data combined at the new temporal cadence to be cut off at times 1.5 intervals from the current time. This should probably be changed in copyvtovdN.for so that the cutoff is at times 1.5 intervals from the current time for the input (rather than output) maps. read_hosf.for, which was originally read_hos0.for, now reads the Helios 15, 30 and 90 degree data and also outputs the photometer and sector identifiers. These changes took the most work, and included help from Paul Hick to adapt the routines read_hosf.for calls in order to read the vax-produced binary files edited by the vax SHPLT. MkDHModeltd.for is similar to the one used in the vax program, but a lot of work went into figuring out the best weights for Thomson scattering. The scheme to make the model brightnesses was not changed, although this was considered, and the weights provided by the MkLOSWeights subroutine have not been thoroughly researched (yet). The MkLOSWeights subroutine should now include a pwr parameter in its calling sequence in the library; this is not used in the Thomson-scattering weights, only in the IPS weights. The MkDHModeltd.for subroutine is considerably different from the Vax version. Finally settled on so far (3/30/01) is a brightness given by:

       DTW = DENj*WTS2(J,I)
       AM = AM + DTW

and weights given by:

       GWTij(J,I) = DTW

The above scheme is very simple, and fixes to the initial density are simply linear ratios. The values of DENj are given using a FALLOFF parameter.
Other versions of weighting were tried, and included:

C The way it was:
C      GWTij(J,I) = WTS2(J,I)
C      GWTij(J,I) = GWTij(J,I)*cosd(XLAT(J,I))
C      DTW = DENj*WTS2(J,I)
C      DTW = DTW*cosd(XLAT(J,I))
C      GWTj = GWTj + GWTij(J,I)
C      AM = AM + DTW
C New
       DTW = DENj*WTS2(J,I)
       AM = AM + DTW
C      GWTij(J,I) = DENjB*DENj*WTS2(J,I)
C      GWTij(J,I) = DENjB
C      GWTij(J,I) = DENjB*DENj
       GWTij(J,I) = DTW
       WTj = WTj + WTS2(J,I)
C      if (I.eq.2000) print *, 'I, J, DENj, WTS2(J,I), AM', I, J, DENj, WTS2(J,I), AM
       end do
       GM(i) = AM
       do j=1,NLOS
C      GWTij(j,i) = GWTij(j,i)/GWTj               ! makes every line of sight the same weight (the way it was)
C      GWTij(j,i) = GWTij(j,i)/WTj/CONSTWT        ! make every line of sight weight according to base density
                                                  ! and relative line of sight weight (session 4) (DENjB*WTS2(J,I)/WTj)
C      GWTij(j,i) = GWTij(j,i)/CONSTWT            ! base density times the l.o.s. weight at that position
                                                  ! (session 5) (DENjB*WTS2(J,I))
C      GWTij(j,i) = GWTij(j,i)/(CONSTWT*1000.)    ! base density times the l.o.s. weight including density
                                                  ! at that position (session 10) (DENjB*DENj*WTS2(J,I))
C      GWTij(j,i) = GWTij(j,i)*AM/(CONSTWT*1000.) ! base density times the total l.o.s. weight including density
                                                  ! in that direction (session 11) (DENjB*AM)
C      GWTij(j,i) = GWTij(j,i)/(CONSTWT*1000.)    ! base density times the deconvolved density at that position
                                                  ! (session 12) (DENjB*DENj)
       GWTij(j,i) = GWTij(j,i)                    ! the deconvolved density times the L.O.S. weight at that position
                                                  ! (session 13) (DTW) (because the base density ratio fix is relative to 1)
       end do
       end do

Mk_Base.for was revised from a version on the vax called MK_VTBTD.FOR. This new version is considerably different from the Vax version and far simpler. Mk_Base.for now (03/30/01) simply divides the original DEN1AU by 3. and uses this as the base density projected to the source surface using the FALLOFF parameter. DEN1AU is a parameter that can be input at the beginning of the main program.
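The settled "session 13" scheme above reduces to a short recipe per line of sight, sketched here in Python (a hedged restatement of the Fortran fragment; the function name and the flat-list arguments are my own):

```python
def thomson_model_and_weights(den_los, wts2_los):
    """For each segment j of one line of sight, DTW = DENj*WTS2(j).
    The model brightness AM is the sum of the DTW values, and the weight
    attached to each segment is DTW itself, so the fixes applied to the
    density are simple linear ratios relative to 1."""
    gwt = [d * w for d, w in zip(den_los, wts2_los)]
    am = sum(gwt)  # model brightness for this line of sight
    return am, gwt
```

Because the weight for each segment is exactly its contribution to the model brightness, a brightness fix ratio applied back along the line of sight distributes in proportion to where the brightness actually came from.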
No change is currently made in DEN1AU, although this was tried and the program converged fine. The Mk_Base.for subroutine does output a lot of information, though, such as the polar and ecliptic density and velocity at both the source surface and 1 AU, as well as the base density at both these locations. mkpostd.for has a new input parameter added, which is the time for each line-of-sight observation in Earth Carrington time. This value is the same as the Earth Carrington location for the IPS data, but differs for the Helios data from the Helios Carrington location. mkpostd.for is the same in this respect as the Vax version. The Earth Carrington time for the Helios data is calculated in the main ipshtd.for program immediately after the calls to read_hosf.for, as in the Vax program. Should STEREO or some other non-Earth-based remote sensing instrument use this program, this change will need to be incorporated for those data. FIXMODELTDN.for was modified to include a new mode parameter that switches to Thomson-scattering input. The mode switch in the FIXMODELTDN.for routine now does little but change the printed text relative to the velocity fix mode. MKDMAPTDN.for was also modified slightly to use a smoothed gridsphere to limit what is and isn't included in the crossed-component limit (the gridsphere mode in /IPSHTD/MKDMAPTDN.for is now 3 rather than 4 as in the versions of MKDMAPTDN.for in other versions of the main program). mkvmaptdN.for was modified in the same way (the gridsphere mode in /IPSHTD/mkvmaptdN.for is now 3 rather than 4). I am now (3/30/01) running scripts to attempt to determine the best overall values of DEN1AU and FALLOFF to use in the tomography program for different spacecraft and time intervals.
I suspect that CMEs decelerate with solar distance while the background solar wind accelerates, and so these two parameters may change depending on the temporal intervals of the fit, and maybe with solar cycle. They will also depend on the weighting scheme used. Also, I may want to give more weight to some photometer data - the 90 degree data, for instance. On 4/14/01 I realized that the density fixes should go from ZERO DENSITY. This means that the densities iterated should be the total densities, and the ratio of fixes should include the base density. To fix the original program, I removed the DMAPMB densities and instead included the base densities and observed brightnesses in the MkDHModeltdN.for subroutine. The base-density brightnesses are now added to the observed brightnesses in the MkDHModeltdN.for subroutine and output from this subroutine to the FixModeltdn.for subroutine. On 4/15/01 I modified this so that now only the base-density brightnesses for each source are output from the MkDHModeltdN.for subroutine. I now add the base brightnesses to the observed brightnesses for each source, and I input these brightnesses to the FixModeltdn.for subroutine. This allows the source write routine writeM_Scomp.for to use the model brightnesses with the base brightnesses subtracted from each source, so that these can be compared properly with the observed brightnesses obtained from the Thomson scattering. All this assumes the writeM_Scomp.for write routine follows the subtraction, which follows the FixModeltdn.for routine, which follows the MkDHModeltdN.for subroutine. The above all seems to work and be consistent. It also seems to give answers not unlike those given by the prior technique. On 4/15/01 I fixed the Mk_Base.for subroutine so that it now outputs proper values; it did not do this following the changes on 4/14/01.
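The base-brightness bookkeeping from the 4/15/01 change can be summarized in one small sketch (Python, names of my own invention; the real flow runs MkDHModeltdN -> FixModeltdn -> subtraction -> writeM_Scomp):

```python
def base_brightness_bookkeeping(obs_brightness, base_brightness,
                                model_brightness):
    """The per-source base brightness is ADDED to the observed brightness
    before the fix step, and SUBTRACTED from the model brightness before
    sources are written out, so the written values compare directly with
    the raw Thomson-scattering observations."""
    b_for_fix = obs_brightness + base_brightness        # input to the fix routine
    b_for_compare = model_brightness - base_brightness  # input to the source write routine
    return b_for_fix, b_for_compare
```

The same base value is added and later subtracted, so the comparison is unbiased as long as the subtraction really does follow the fix step, which is the ordering assumption stated above.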
The run on 4/21/01 to find the best parameters for the 10x10 matrix settled on parameters 9.0, 1.0, 8.5, 2.1 for the Helios data, and used the default for the velocity for CR 1653 and Helios 2 data. The correlations are 0.859 for density for Helios 2 and 0.744 for UCSD IPS velocity for this rotation. This is considerably better than before for this rotation (the best H2 fit before was about 0.762). The IMP fits still look pretty good, too, and were almost as good as when they were actually fit to EA densities with a higher spatial filter value. Thus, the correction made on 4/14/01 surely helped and worked well. On 4/21/01 I changed the ending of the program so that it now outputs two types of density matrix versions using Paul's new write3d_info routines, and I have run this to make 3-D density matrices to show. For the 1653 rotation:

%ipshtd-I-Info, D observations 13667 (Both Helios 1 and 2)
%ipshtd-I-Info, V observations 108 (UCSD IPS)

On 4/21/01 I also installed more Helios 1 and 2 files into the CASS183 Helios1 and Helios2 directories on soft/dat/. These files will allow me to do tomography at least on the Helios 2 file B79V4_110_0056.hos. To do this I will need to invoke the ipshtd program by typing:

$myfor/IPSHTD/ipshtd ucsd=SANIPS,$dat/ucsd/ helios2=B79V4_110_"(4)".hos,$dat/helios2/   (CR 1681)

The rotation is hard-wired into the program currently and needs to be changed from 30 to 56 as:

XCbegHOS2 = 56

Also, since the Helios 2 data is V-light, I should change the parameter before the read program to V light for the Helios read routine:

LT2 = 3

(These two things need to be changed back when the program is run with 1977 data.) On 4/21/01 this version of the program gets:

%ipshtd-I-Info, D observations 7539
%ipshtd-I-Info, V observations 151

and seems to converge.
Thus I have begun a script for this program to settle on the best parameters for this rotation, using a new version of the program named ipshtd79 from ipshtd79.for. On 4/21/01 I installed the modifications from ipsd2020.for into the program to output nv3 files.

On 4/22/01 I noticed, and proved to myself, that my installations and these modifications significantly altered the velocity data and somewhat altered the density data, as noted by the correlations. It turns out that the velocity data is not very complete; the modifications in the ipsd2020.for program make it more complete, and these lower the fit correlations in rotation 1653. They are also incorrect for Helios data, since Helios is not at the location of the Earth, so Fillmapols.for cannot use the time center well. I thus modified the ipshtd.for and ipshtd79 programs to work without the fillmapols call, and have run the program again. The current version for rotation 1653 gives correlations of 0.863 and 0.590 for density and velocity respectively. The convergence for this revised version of the program is of course exactly the same as before, and I confirmed this throughout the run with the same parameter run in temp/ipshtd/H2.

On 4/26/01 I modified the write3d_infotd.for subroutine to write3d_infotdh.for so that it works with the ipshtd79.for program. I modified this routine and the ipshtd79.for program so that there is an extra input in the routine that allows a Helios region of interest to be input, so that the matrix can be shown properly by the imaging IDL programs. This modification seems to work well. There will always be a problem with this when two Helios spacecraft are used to deconvolve the heliosphere, since the two spacecraft are not in the same location. There is an even further fix - namely, to treat the northern and southern hemispheres differently in the programming before the matrix is output. 
This will take modifications to several subroutines that are pole and hemisphere specific. The only real subroutine like this, though, is the fillmapols.for subroutine that deals specifically with the region of interest. You would also need to decide which spacecraft (or Earth) to center on when you output the final density matrix.

On 1/4/02 I set up the mk_ipshtd77 program to work on Tamsen's directory. The ipshtd77 program can be compiled using the regular g77 Fortran compiler by running the make file: ./mk_ipshtd77 The Fortran program can be run using the sequence: $myfor/TIPSHTD/ipshtd77 ucsd=SANIPS,$dat/ucsd/ helios1=A77B4_318_"(4)".R08,$HOME/dat/Helios/ helios2=B77B4_318_"(4)".R08,$HOME/dat/Helios/ Use 1661.5 as the rotation number in order to have the period of November 20 - 29, 1977 fully analyzed. Other default parameters are probably OK.

On about 2/1/02 Tamsen got her subroutines to provide magnetic field data for the velocity fields provided by the program. On 2/18/02 I modified the main program ipshtd77.for so it now gives the correct observed brightness to the file gts_1661_18. On 2/20/02 I found that the Helios data sort routine in the main program ipshtd77 was leaving a small portion of the data out of the analysis; this is now fixed. On 2/26/02 I found that the read_hosf.for program was blanking out the galactic center. I have now disabled this feature in the read_hosf.for subroutine. This adds perhaps 3000 more lines of sight to the tomography for this 1977 analysis.

On ~3/22/02 I realized that the EXTRACTD routine was not using the FALLOFF parameter to extract the density from the base map. I fixed this on 4/1/02, and now the fits to the in-situ data at Earth and at Helios are far more consistent. The correlations do not change very much at all for the different time series, and I guess this makes sense, since only the scale of the change is affected (slightly different over the whole time) but not the change itself. 
Thus, extractd and its subroutine call are different now in this and the ips2020 version of the program. The transferred version of the program gave identical answers when run on CASS185 on 4/3/02. To give this identical answer Paul needed to change a double precision rotation parameter back to single precision on CASS185. Since I expect that the parameter is correct in double precision and not in single, I expect that the earlier run on 4/2/02, which was close but not identical, was the more correct version. On 4/4/02 the transferred version of the program (which previously compiled without errors or warnings) now compiles on CASS185 with a warning about get2dval and get3dval being duplicate routines. This does not give me a good feeling about the current version of the program on CASS185.

On 4/22/02 I began the process of making the ipshtd77 program accept a three-component xcshift parameter that includes latitude as well as time shifts. This involves modifications to the main program ipshtd77.for and to mkshiftdn.for, mkpostd.for, extractd.for, write3dinfotd.for, mkveltd.for, and get4Dval.for. These were modified and checked, and gave the same answers when no modification was in place.

On 5/20/02 a value of Rconst was added to the parameter outputs of mktimesvd. This constant is then added to the inputs of mkshiftdn.for and extractd.for to make them work in an accurate form. On 5/20/02 I modified mkpostd.for to incorporate the complete latitude change. This also takes a modification in the main program to incorporate this projected latitude. In addition, modifications need to be made to the calls to include this projected latitude (XLAT-->XLproj) in Get4dval, MkVmodeltd, MkGModeltdn, and MkDHModeltd. This is done to incorporate the new projected values of latitude that are already handled correctly in these routines. On this same date I modified the routines write3dinfotd.for and extractd.for to include the projected values of latitude correctly as well. 
On 5/21/02 the version of the program that was transferred to CASS185 is the same as the version on both the /transfer and TIPSHTD subdirectories, with the exception that the transfer version does not do magnetic fields while the programs that Paul and Tamsen install are undergoing change. The version of the program on CASS183 in the main directory, however, does deal with magnetic fields in the old way. Both the CASS183 and CASS185 versions give identical H1, H2 and EA files using default parameters.

On 5/23/02 I discovered that I had inadvertently placed the new latitude shift in Get4Dval as a projected rather than a line of sight variable. There is no difference in current analyses, but there will be in the future. This has now been changed in both the transfer and regular versions. I also checked to be sure that all the Get4Dval calls have time first in the calling sequence, since this is out of sequence from the index values. They all do.

On 5/29/02 a problem with write3d_infotd.for was fixed. The bottom density map array written to disk, which was copied into the file to be output, was being multiplied by a file that was set to near bad values. This gave a write error to disk for a non-standard file. This is now fixed. The dummy values of VM and DM that are written in the subroutine need only have two dimensions, and this has now been fixed in the main program and subroutine.

On 11/3/03 I copied all the TIPSHTD files to the TIPSHTDnew subdirectory and began to modify the files in this directory to incorporate the changes made in the TIPSd2020 directory to use the real times used in the ipsd2020 files. On 11/4/03 the mk_ipshtd77 file has been changed to work like the mk_ipsd2020 file. All the subroutines have been revised to have extensions of *.f and no capital letters. The Sun$RAU have been modified to Sun__RAU. 
I then replaced all the subroutines copyvtovdn.f, extractd3d.f, extractd.f, fillmaptn.f, get3dtval.f, get4dval.f, mkpostd.f, mkshiftdn.f, write3d_infotd3d.f, mkvmodeltd.f, mkgmodeltdn.f, and mkdhmodeltd.f with versions in the IPSd2020new directory, and modified the mkshiftdn.f and extractd.f calling sequences. I then replaced mktimesvd.f with mktimes.f in the main program, and changed a couple of outputs from the program.

On 11/7/03 I got the program running with double precision source times input. The subroutines revised to do this were: ipshtd77.f, mkpostd.f, readvipsn8.f, writem_scomp8.f, readgips.f (will need work if Cambridge data are ever read again, and this was NOT done), and read_hosf8.f (the Helios Doys8 is not really brought in as double precision but only converted from single). The program ipshtd77 now works. By the way, I have the adjustJDCar subroutine working now. The parameters match the time series oddly, and I will look at ipshtd78 later to see if the ones in that program are better and make the time series more accurate. They weren't.

I was running: ./ipshtd77 ucsd=SANIPS,./ helios1=a77b4_073_"(4)".hos,./ helios2=b77b4_073_"(4)".hos,./ with XCbegHOS1 = 30.0 XCbegHOS2 = 30.0 I will now run the program with files from my PC backup and on the Nov 77 data: ./ipshtd77 ucsd=SANIPS,./ helios1=a77b4_073_"(4)".ave,./ helios2=b77b4_073_"(4)".ave,./ and: XCbegHOS1 = 38.5 XCbegHOS2 = 38.5 This gives %ipshtd-I-Info, D observations 12431 %ipshtd-I-Info, V observations 14 And with default parameters: 6 1.2 5 2.2

On 11/10/03 I discovered a problem with the ipshtd77.f program - bonly1 was set true, and this placed a daily digital cadence on the velocity data. As a consequence the velocity tomography did not work at the end of 1977 with only 14 sources. This is now fixed. 
On 11/11/03 I transferred these files to the SMEIIPSTD directory, and on 11/14/03 I fixed the DOYSV array in the main program. The programs in the SMEIIPSTD and IPSHTDnew directories now give exactly the same answers using default parameters. The magnetic field pick-up does not work in this version of the program. On 11/14/03 I found that the program bombed in mktimes when no Helios sources are present but density reconstruction is requested anyway. I now check for this in the data just before the call to mktimes. There were a couple of other changes needed to use SMEI data: the automatic search routine for the read program needs to be modified.

On 11/19/03 I got the program to read fictitious SMEI data and to run to a conclusion. The new subroutines are: read_smei8.f smeiread.f On 11/20/03 the program runs with the following command line: ./smeiipstd nagoya=nagoya,/home/bjackson/dat/nagoya/yearly smei=03_149_"(9)",/home/bjackson/dat/smei/

On 01/01/04, at the beginning of this calendar year (and part of the last), I discovered a bug in the tomography program. When times are input differently for density and velocity - i.e., densities a factor of two more highly resolved digitally - the velocity values increase to well above their correct in-situ amount. This does not occur if only the LOS weights are better resolved in density than in velocity, or if both density and velocity are better resolved spatially. Note: DMAPV is the DMAP for velocity and VMAPD is the VMAP for density. At the beginning of 2004, I modified several parts of the tomography program, including revisions to the copyvtovdn.f and copydtodvn.f programs. These two programs had errors, and required a dummy array to be input to solve the major problem. These subroutines now work better but do not solve the problem that causes the error mentioned on 01/01/04. On 1/23/04 I modified mkvmaptdn.f so that DMAPV is no longer input in its calling sequence. 
This quantity was not used within the subroutine. On 1/23/04 I modified fillwholet.f so that it no longer redundantly fills the center map. This should change the answer some, since there are some bad center maps in the whole time series. On 1/23/04 I also modified the section in mkshiftd that fills temporary maps that are bad. When the number of times was odd, this section did not work if the center map was bad. This will need to be revisited, because it now does not work for even total values of time. On 6/11/04 I changed subroutines mkgmodeltdn.f, mkdhmodeltd.f, and mkvmodeltd.f so that real*8 XCtbeg and XCtend are carried through the subroutines and into the Get3dtval subroutine.

On 8/3/04 I modified the mkshiftdn.f subroutine to hopefully better handle bad Vtmp maps. On 8/3/04 I also modified the mk_base.f subroutine so that it now better represents the DEN1AU values input (at 2/3rds). On 8/3/04 I also modified the mktimes program so that the number of times is always even. On 8/3/04 I modified the smeiipstd.f main program so that the various constants now operate to provide accurate values depending on the approximate numbers of lines of sight in both velocity and density. On 8/3/04 I also modified the smeiipstd.f main program so that the velocity 2D maps are smoothed before they are output to the EA files and to the 3D matrix. On 8/3/04 I have been using the smeiipstd.f main program with:

parameter (NLmaxG = 44000,  ! Max # G data points (13000) (400,000)
     &     NLmaxV = 2000,   ! Max # V data points (2000)
     &     NGREST = 2,      ! G temporal resolution factor better than 1 day (Helios=1) (2 for SMEI works)
     &     NVREST = 2,      ! V temporal resolution factor better than 1 day
     &     NGRESS = 3,      ! Spatial G resolution factor better than 20 degrees (Helios=2) (4 for SMEI works)
     &     NVRESS = 3,      ! Spatial V resolution factor better than 20 degrees
     &     NF = 1,

and the above on ZIGGY below with time resolutions of 8 hours: parameter (NLmaxG = 44000, ! 
Max # G data points (13000) (400,000)
     &     NLmaxV = 2000,   ! Max # V data points (2000)
     &     NGREST = 3,      ! G temporal resolution factor better than 1 day (Helios=1) (2 for SMEI works)
     &     NVREST = 3,      ! V temporal resolution factor better than 1 day
     &     NGRESS = 3,      ! Spatial G resolution factor better than 20 degrees (Helios=2) (4 for SMEI works)
     &     NVRESS = 3,      ! Spatial V resolution factor better than 20 degrees
     &     NF = 1,

When the latter is done the program complains about Vtmp in mkshiftdn.f always being bad, and so some work is needed to fix this aspect of the problem. However, on 8/4/04, with most of the aurora cleaned from the time series used (43062 of them), the first of the two program runs gives a correlation of 0.882 - 0.930, with most of the discrepancy coming from the regions to either end of the main peak, which shows up well and at about the same amplitude as the in-situ density. The correlation varies according to how well resolved the time peak is, with the better correlations coming from in-situ averages that are less well resolved (0.5 as opposed to 0.3). At 0.5 in-situ resolution, the correlation is best but the in-situ peak is lower than the model peak by about 20%.

On 8/4/04 I found an error in mkshiftdn.f. Ever since times were placed in terms of days, the search for shifts should have gone to earlier times but instead went to later times at the lower level. This was normally not a problem, since the time normally began at the same value as the lower level, and this was within the range of the time step from one level to the other when the steps were large. Slow velocities might have caused an error here. However, with the short time steps now used, slow material shifted too far from the zeroth time of the lower level. From my calculation, this error may have existed since 4/22/02, since 5/20/02 for ipsd2020.f, and surely from the time that double precision input times were input as times on 11/7/03. 
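The direction of the 8/4/04 mkshiftdn.f error follows from simple kinematics. A sketch, assuming constant radial speed (the names and units here are illustrative, not the program's):

```python
# Material observed at time t_upper at an upper (outer) level left the
# lower (inner) level earlier by the travel time dr/v, so the matching
# time at the lower level is EARLIER than t_upper -- and slow material
# shifts furthest back, which is why the shift search must open toward
# earlier, not later, times.

AU_KM = 1.495979e8  # kilometres per AU

def lower_level_time(t_upper_days, dr_au, v_kms):
    """Time (in days) at the lower level matching t_upper at the upper level."""
    travel_days = dr_au * AU_KM / v_kms / 86400.0
    return t_upper_days - travel_days
```

With large level-to-level time steps the offset stayed within one step and the wrong search direction was masked; with short time steps, slow material falls outside the window, matching the log's observation.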
In any case, this surely fixed the problem I was having with the program complaints about Vtmp in mkshiftdn.f!!!!!! This is great!!!! On 8/4/04 the program was run through with the fixed mkshiftdn.f, and while there were few complaints about Vtmp, the program did not give as good correlations as before (~0.7). The peak is higher than it is supposed to be and it lags some, as if the aurora now gives trouble at the time of the CME arrival.

On 8/6/04 I even more strongly edited the aurora from the SMEI data, leaving only about half of the sources. This seemed to make the density peak occur at the appropriate location. The same run was made on ziggy as on cass183. The two programs begin to diverge at about the third iteration, and the EA files turn out not to be the same, though pretty close.

From 8/6/04 to 8/9/04 I used strong auroral editing and eventually found that of 659 locations initially, only about 300 survived. The remaining ~32,000 lines of sight were used to determine 7x7x.5 resolution and even 5x5x.33 resolution (on ziggy). Basically I discovered that I needed to degrade the digital resolution (with 32,000 lines of sight) even with the 7x7x.5 resolution, to have that few lines of sight in a 2-week period. All of the runs produced a peak slightly higher than the in-situ, and usually slightly following the in-situ. The velocity was always lower than the in-situ. The best in-situ fits so far with the corrected mkshiftdn.f were with the digital 7x7x.5 resolution degraded to 10x10x.75.

On 8/10/04 I found that one of the primary reasons so few time series survived was that the first cut was too strong. Now the first cut is set at 20 ADU as opposed to 5. The first cut was eliminating features in the antisolar direction that varied over large amplitude, caused by too large a removal of the Gegenschein. Now if 659 locations are input, ~463 locations and ~52,527 lines of sight survive until the end, and if 1002 time series are input, 713 survive along with 80,751 valid lines of sight. 
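The 8/10/04 change amounts to loosening an amplitude threshold. As a sketch only - the exact cut criterion is not spelled out in the log, so a simple peak-to-peak test is assumed here:

```python
# Hypothetical sketch of the "first cut": reject a time series whose
# variation exceeds the cut.  Raising the cut from 5 to 20 ADU retains
# large-amplitude antisolar series that the Gegenschein removal had
# inflated, so many more locations and lines of sight survive.

def passes_first_cut(series, cut_adu=20.0):
    """True if the series' peak-to-peak variation is within the cut."""
    return (max(series) - min(series)) <= cut_adu
```

A series varying by 12 ADU would have been rejected under the old 5 ADU cut but survives the new 20 ADU one.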
On 9/17/04 I modified smeiipstd.f to include limits on the density and velocity in the anti-Earth hemisphere, similar to the newest versions of ipsd2020.f. I set the limit for D at 90 degrees just to check what happened. When I did this the program forces the SMEI data to operate so that somewhat more mass goes into the Earth hemisphere, but it does not solve the basic problem of a lot of mass being placed into the file just where the SMEI inner boundary resides. I think I still need a filter or taper there so that there is no runaway at the boundary of the 3D volume where the volume goes to zero.

On 10/16/04 I copied the working version of the October 2003 time period smeiipstd.f program over to this directory to modify it so that it can input data in time from SMEI one orbit at a time. I can perhaps also work on the problem stated on 9/17/04 above. On 12/23/04 (and actually much before) this program now works to input data one orbit at a time.

On 11/03/05 the program runs with the following command line: ./smeiipstd nagoya=nagoya,/home/bjackson/dat/nagoya/yearly smei=points_0005_"(20)",/home/bjackson/dat/smei/

On 02/20/06 the program on this new cass183 machine has had its mk file altered to run on f77. It should operate with the command line: ./smeiipstd nagoya=nagoya,/home/bjackson/dat/nagoya/yearly smei=points_0005_"(20)",/mnt/storage/smei/data1/base/

On Oct. 1, 06 the program on this new.cass183.ucsd.edu machine has run with: ./smeiipstd nagoya=nagoya,/home/bjackson/dat/nagoya/yearly smei=points_2005_"(20)",/home/bjackson/dat/smei/jan_2005/

On Oct. 7, 2006 the program now crosses to a new year in the mkpostd.f routine, but does not yet in the rest of the program. On Oct. 10, 2006, after much work, I have been able to get the program to cross the year-end boundary and work in the rest of the program. 
The main fix, to the mkpostd.f subroutine, was fine all along, and this fix will also be needed in the IPS tomography for it to go across the year-end boundary in the future. The second fix needed is in the main program and specific to the mk_base.f subroutine. This subroutine uses two constants, nTminTS and nTmaxTS, to limit the indexed times that are used to determine the base density and velocity values. These constants were not calculated correctly in the main program as the year-end boundary was crossed. I put in a kludge to fix this problem in the main program, and it now works. Since there is no counterpart to the mk_base.f subroutine in the IPS tomography, I expect there will be no need to make this modification there.

On ~April 12, 2007 I modified the smeiread.f program to read in the points data that now potentially has an orbit and orbit fraction in it, rather than the seconds-in-the-orbit information that was in the points data previously. The timeseries-making program now outputs this format to the points data files. In this way all the information about orbit and orbit fraction can be kept internally in the file for each source, rather than having a mixture of the information - the start time of the orbit kept in the file name, and the fraction of the orbit kept in terms of seconds in the data file itself for each "source". The smeiread.f subroutine did not seem to work as I had obtained it from the cass183 computer and input it to new.cass183. ./smeiipstd nagoya=nagoya,/home/bjackson/dat/nagoya/yearly smei=points_2003_"(20)",/home/bjackson/dat/smei/may_2003 (works 04/13/07)

On April 17, 2007 I discovered the nv3f files do not seem to give the correct answer at the Earth when run in Paul's IDL programs. The extractd.f subroutine gave ~the same answer for the May 2003 data period (CR 2003.0 - 2004.0) using the new timeseries analysis, but the Extractdn.f subroutine at the end did not give a good answer. 
This problem was traced on April 17, 2007 to an error in the smeiipstd program that seems to have been there from the beginning: the line near the end of the program, RRSCON = (R1AU/RSS)**FALLOFF (bad line), should have been RRSCON = (R1AU/RRS)**FALLOFF On April 17, 2007 the Extractdn.f subroutine was changed successfully to give a file named e3_XXXX.XXX at Earth rather than ea_XXXX.XXX. The write3D_infotd3DM.f subroutine calls still do not seem to give the "correct" nv3f files, at least according to Mario Bisi, who has downloaded the files from the new.cass183 computer and analyzed them with the IDL routines he has to extract the timeseries at Earth.

On April 18, 2007, Paul discovered the problem and why the IDL program crashes using the nv3 files. Only one nv3 file was produced with an incorrect region of interest in the header in the subroutine write3D_infotd3DM.f, and this is traced to the subroutine XMAP_SC_POS8.f. Perhaps tomorrow a fix will be forthcoming. On ~April 20 Paul found a problem with mktimes.f, which I fixed in these analyses too, having to do with the leap year analysis in this subroutine.

On April 20-24, I installed a third data set analysis output for the SMEI tomography that allows the best-filled 3D arrays that we know how to build. These are written out as nv3o files. Hopefully these will provide good boundary conditions for the 3D-MHD modeling. On April 23 I found that VMAPD was not being filled in the final data analysis in the SMEI tomography, and on April 24 I found how to fix this problem. The error was that VMAPD was not being filled for either the nv3f or the nv3o files at the end of the program, and the velocity data (not the density data) were not placed correctly into the files following their "conditioning". This was only for the case where the density and velocity data do not have the same dimensional size, which, as I discovered, was the case for the SMEI tomography I was running. 
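The RRSCON fix above is a one-character change with a large effect: the scaling constant must use the extraction radius RRS rather than the fixed source-surface radius RSS. A sketch of the corrected scaling, with variable names taken from the log (the surrounding usage is assumed):

```python
# Corrected constant from the April 17, 2007 fix:
#   RRSCON = (R1AU/RRS)**FALLOFF
# i.e. the ratio of 1 AU to the extraction radius, raised to the
# radial-falloff exponent.  The bad line used RSS (source surface)
# instead of RRS (extraction radius).

R1AU = 1.0  # AU

def rrscon(rrs_au, falloff):
    """Scaling constant (R1AU/RRS)**FALLOFF for a radius rrs_au in AU."""
    return (R1AU / rrs_au) ** falloff
```

For example, with FALLOFF = 2 (an r**-2 density falloff), a value extracted at 0.5 AU is scaled by a factor of 4 relative to 1 AU; with the bad line the factor was frozen at the source-surface value regardless of extraction radius.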
On February 28, 2008 we discerned from the last August SPIE that 1 S10 = 0.46 ADU, or 1 S10 = 0.552 ADU assuming 20% more electrons. This was placed into the read_smei8.f subroutine on new.cass183, and a run of the May 2003 CME period (CR 2003.0-2004.0) was made with a constant vel = 800 km/s.

On May 7, 2008 I modified the smeiipstd program to use gridsphere2D with the last parameter set to 0.0 rather than 90.0. This was done in the main program throughout. The main program was renamed smeiipstd0 and placed in the subdirectory smeiipstd0 off developing. On May 9, 2008 I further modified the following subroutines: mkshiftdn.f to mkshiftdn0.f, and mkdmaptdn.f to mkdmaptdn0.f, so that they now have 0.0 rather than 90.0 in their gridsphere2D calls. On May 9, 2008, before the second write3dinfo, I placed the following call into the main program to smooth the regions over the poles: call GridSphere2D(ALng,nLng,nLat,1,DMAP(1,1,N),CONRD/2.0,0,0.0,0.0) ! Change 5/9/08 to smooth polar holes

On June 15, 2008 I modified a version of write3D_infotd3DM to output higher-resolution 3D matrices. The new subroutine is called write3D_infotd3DM_HR.f. The main program smeiipstd0.f is modified to include this version of the subroutine, which now has three new input parameters, and the high resolution output 2D and 3D matrices required to provide a smoother output from the 3D reconstructions. The lowest level is still linearly interpolated, as are all the levels and all the spatial points, but these are done from the shift matrices XCshift and the change-from-base 4D files DVfact and DDfact, where the changes are hopefully not so large between the original digital location points. We will now see if this helps eliminate the wiggles shown in the movies as the structures move outward from the Sun. Before, a temporal interpolation was the only one done, and this helped when the spatial resolution was larger. 
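The February 28, 2008 photometric calibration above is simple arithmetic: assuming 20% more electrons scales the S10-to-ADU conversion by 1.2.

```python
# SMEI photometric calibration placed into read_smei8.f:
S10_ADU = 0.46               # 1 S10 in ADU, from the August SPIE result
S10_ADU_E20 = S10_ADU * 1.2  # 0.552 ADU per S10, assuming 20% more electrons
```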
The increased resolution is interpolated at more additional locations set by NintLng, NintLat, NintHt, and the old value NinterD. All four of these parameters are now set to 3. Since this is a display and an extraction at the Earth and inner planets only in these first attempts, the height is also able to be set according to the number of heights wanted, and this requires another parameter, nMapHR. On June 17, 2008, after considerable fussing, I certified pretty well that the 3D matrix written mirrored the lower-resolution one and gave the same general values. A single matrix at 1.5 AU is 91 Megs in size, and calculating and writing these matrices at the original temporal resolution takes perhaps 12 hours.

On ~July 1, 2008 I was able to incorporate a working version of iReadOotyn8 and iProcessOotyn8 into readvipsn8. These use student John Clover's re-written version of the Ooty data files that have been made available from Mano by Mario Bisi. On July 3, 2008 these are now in place and are used to provide very superior high-resolution nv3 files for output. I also added the ability to output an abbreviated version of this file at a single height at the end of the analyses, so that a completely filled file could be output. There are now additional inputs to the main program that allow these subroutine calls as needed.

On July 3, I also removed all the reads and information about the Helios 1 and 2 spacecraft from the smeiipstd0 program. The ireadhelios subroutines had already been removed from the main program, so this program no longer worked with the Helios data. This required the removal of considerable sections of the program. Thus there were tests on the smeiipstd0 with a changed name, smeiipstd0_NHC (No Helios or Cambridge). As soon as I certify that this new program operates the same as the old smeiipstd0 routine, I will name the new version of smeiipstd0 back to its original name. On July 3, 2008 I also slightly modified the readvipsn8.f subroutine. 
This needs to be installed into ipstd_0 now that more precision is required of the ipstd program to read in source observation times in double precision. On July 3, 2008 I also modified the extractn.f program so that it now allows an input of the names appended to each output file. This is needed to allow the main program to add new objects. I also included a query at the beginning of the program asking which of these bodies you want to have an input parameter file for. The objects so far are:

character cPrefix (NTP)*2 /'ME','VE','EA','MA','JU','SA','UR','NE','PL','UL','H1','H2','SA','SB'/
character cPrefixf(NTP)*2 /'m1','v2','e3','m4','j5','s6','u7','n8','p9','u1','hA','hb','s1','s2'/

Stereo A and B do not have ephemerides available in Fortran yet. On July 4, 2008, the extractn.f subroutine now also allows input of the extraction interval. Originally this interval was set at 6 hours, but it is now set at the NinterD interval (input in the calling sequence) used for the nv3 file production. On July 4, 2008, since the number of observed V values checks out for a given interval, as does the extractdn subroutine, I have renamed the much-modified smeiipstd0_NHC program back to smeiipstd0 and recompiled it. This will now be the standard.

On July 4, 2008, I discovered the mkshiftdn0 subroutine was not named as used in the smeiipstd0 main program. I have now fixed this, and likewise with the mkdmaptdn program. The versions on cass183 had the proper gridspheres installed OK. On July 5, 2008 I discovered that the extractpositionn8.f subroutine was at the end of the extractd.f subroutine. When I dropped the extractd.f subroutine from the mk_smeiipstd0 calling sequence, the extractpositionn8.f subroutine could no longer be found. I have now placed the extractpositionn8.f subroutine at the end of the extractdn.f subroutine. On July 5, 2008 I discovered that the mkvmaptdn.f subroutine also had a gridsphere2D call in it that needed to have its last parameter set to 0.0 rather than 90.0. 
I fixed this, renamed mkvmaptdn.f to mkvmaptdn0.f, and recompiled smeiipstd0. On July 6, 2008 I discovered an error in calling extractdn.f: FALLOFF and NinterD were reversed in the first of the two calls to extractdn.f.

On July 28, 2008, because the higher-resolution "nv3h" files often show higher-resolution features that are present in the kinematic model, I modified the extractdn.f program to output STEREO data. This required a new subroutine, stereoorbits.f. This new subroutine contains the current STEREO ephemeris from NASA. The stereoorbits.f program was discovered not to work, and not to agree with the runs made when STEREO locations were extracted using Paul's IDL program. The reason for this was found in the subroutine kepler_orbits called by the stereoorbits.f program: there was a different definition of some of the orbital parameters in this latter subroutine. Paul fixed this, but it was also discovered that the Fortran version of ulyssesorbits.f has always been wrong, so that if this were ever used it would not give the correct location for Ulysses. This is now fixed in both ulyssesorbits.f and stereoorbits.f, and checked to agree both with the IDL routines and, for STEREO, with the NASA ephemeris. The ulyssesorbits.f subroutine is only strictly good for the first Ulysses solar pass, and needs to be revised to include better parameters for subsequent Ulysses passes, including the current pass. The first attempts to provide STEREO data subtractions have been made using the incomplete IPS data from STELab and the ipstd_20 program. This shows that the stereoorbits.f subroutine appears to work. More checks are currently being run.

On about August 1, 2008, I found that John had modified the readvips routine to incorrectly read Ooty data. I modified the readvips8.f program to read Mano's current data set correctly. This provided really good correlations of velocity with Mano's Ooty data set. This new routine was installed into the program. 
On about August 15, 2008 I modified readvips8.f and the main program, and added two subroutines to the main program called writegoodsourceb.f and writegoodsourcev.f. On request, the program now writes out files that duplicate the input files in every way except that the lines of sight the smeiipstd0 program thinks bad are flagged bad. These output files can then be read by IDL routines that place lines of sight on sky maps. On about August 18, 2008 Mano sent new Ooty data (from 2007) with a different format. I modified the readvips8.f routine to read Mano's new Ooty data set format. The program still outputs the good-source Ooty files with the old format for pickup by the IDL routines. These new Ooty data obtained at solar minimum do not give the same good velocities as the data set from 2004.

On about August 23, 2008 Mario discovered that the ipstd_10 program bombed when it was asked to provide Ulysses files. This caused a rewrite of the input to the extract routine to:

character cPrefix (NTP)*2 /'ME','VE','EA','MA','JU','SA','UR','NE','PL','Ul','H1','H2','Sa','Sb'/
character cPrefixf(NTP)*2 /'m1','v2','e3','m4','j5','s6','u7','n8','p9','u1','hA','hb','s1','s2'/

On September 8, 2008 I added parts to the main program and to mkdmaptdn0.f and mkvmaptdn0.f (now called mkdmaptdn0n.f and mkvmaptdn0n.f) to write out, on request, er1_ and er2_ files that show 3D confidence levels that can be imaged similar to nv3 files. The er1_ files contain the composite line of sight crossings that are used to determine whether (or not) the region has been deconvolved. This array was always brought out of the two above subroutines, and now it is used as input to the write3d_infotd3Dm.f routines. The write3d_infotd3Dm.f routine needed to be modified so that the normal density- and velocity-with-distance factors are not multiplied into these arrays as is done for the density and velocity themselves. Before use, the array values are modified to reflect the Gaussian temporal and spatial filters used. 
The er2_ files are the composite weights on the source surface, and these were never before output from the mkdmaptdn0.f and mkvmaptdn0.f routines. These arrays contain information not only about the weighting functions, but also about the densities and velocities along the line of sight used to weight the source surface. They look not only something like the line of sight crossings, but also like the densities themselves. Before use these array values too are modified to reflect the Gaussian temporal and spatial filters used.
On 02/11/2009 an error was discovered in the ipstd_20 program data that provides the reconstructions. The lines of sight on the Carrington map seem shifted by about 120 degrees from where they should be. We do not know when or where this error crept into the program, but suspect that the changes to the read routines on 08/01/08, when the readvips8.f routine was modified, may be the cause. If so, the readvips8.f routine for this program is suspect as well. The current version being compiled is named readvipsnn8.f and called as readvipsn8.f.
On 02/13/2009 I found an error in the mkpostd.f subroutine that would appear when a year boundary was crossed. When a year was crossed, the Doy8 used in ECLIPTIC_HELIOGRAPHIC8 was not correct. The value of tXC is corrected for a year crossing correctly as long as Adaybeg is set to the total number of days in the preceding year. This is now fixed in the mkpostd.f routine. There was also an initialization of Idaybeg in the main ipstd_20n.f program that looked wrong, but it was not actually in error in this program. Also on 02/13/2009, a presumed error in the mkpostd.f subroutine was fixed: when a new year is crossed, the new DOY needs to have the previous year's number of days added, but 1.0d0 subtracted, since the DOY begins at 1.0. Following 02/13/2009 the fix made on this date was certified to handle the year crossing correctly.
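The year-crossing arithmetic in mkpostd.f is easy to get wrong, so here is a minimal sketch of the rule exactly as stated above (the helper name is hypothetical; days_in_prev_year plays the role of Adaybeg):

```python
def continuous_doy(doy, crossed_year, days_in_prev_year):
    """Place a day-of-year on a time axis that continues from the previous
    year.  DOY begins at 1.0, so after a year crossing the previous year's
    total number of days is added and 1.0 is subtracted, as in mkpostd.f."""
    if crossed_year:
        return doy + days_in_prev_year - 1.0
    return doy
```

With this rule, DOY 1.0 of the new year maps onto day 365.0 (or 366.0 after a leap year) of the continued axis, so the axis stays continuous across the boundary.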
On 03/03/2009 a renamed version of this program, smeiipstd0n_intel.f, was compiled successfully with 500,000 lines of sight using the intel compiler, ifort, on new.cass183. That version of the program did not run past the input to obtain the nagoya velocities, but a version with 400,000 lines of sight did compile and run. In the process, the mk_base.f subroutine was found to have a format bug and was fixed, and the extractn.f subroutine was also found to have an error. Otherwise the program now runs with the intel compiler (using mk_smeiipstd0n_intel), except for the extractn.f routine.
On 03/04/2009 the error in the extractn.f routine was traced to the bGetLun function call, which now has two arguments rather than one. Paul suspects the newer intel library function on new.cass183 has this, and it is not in the g77 compiler on my machine. If so, the g77 compiler cannot be used with the same version of extractn.f as the intel version - bummer. Since I do not yet want to give up on the g77 compiler, or replace the newer library version of the g77 compiler on cass183, I need to make sure the old version of extractn.f is still available for the other routines, and I will need to rename the intel version to extractdn_intel.f.
On 05/14/2010 I transferred the readme_smeiipstd0.txt file to this directory (smei_in-situ_0n) on my UCSD desktop computer. This is in preparation for making the smeiipstd program include in-situ measurements in its analyses, as has been done for the ipstd_20n routine. The date on the smeiipstd0n.f routine is 3/13/2009. This old version of the program has at least one known problem in that the year crossing does not work correctly for non-leap years. The jump from 2009 to 2010 is one example of where this jump does not work correctly.
On 05/14/2010 the changes to the smeiipstd0n_in.f routine and those called by it will include:
smeiipstd0n.f --> smeiipstd0n_inp.f
mklosweightsm.f
aips_wtf.f
readace8.f
mkpostd_in.f
mkvmodeltd_in.f
mkvmaptdn0n_in.f
fixmodeltdn.f
mkgmodeltd_in.f (mk_dmodeltd_in.f)
mkdmaptdn0n_in.f
On 05/17/2010 the changes, mostly to smeiipstd0n_inp.f and mk_dmodeltd_in.f, were completed. An additional subroutine, readace_b8.f, was provided to more efficiently use the arrays needed for the associated in-situ density files. The program has been tested (05/18/2010 and earlier) using:
smeiipstd_inp nagoya=nagoya,/home/bjackson/dat/nagoya/yearly smei=points_2003_"(20)",/home/bjackson/dat/smei/may_2003_08_ns
and the default settings for this program, which were set to begin at CR 2003.3.
On 05/18/2010 the smeiipstd0n_inp.f program now seems to converge to a solution of the analysis using the in-situ velocity and density measurements. It also works to converge immediately to fit the base density using a fit to the in-situ densities. This fit has changed from 5.0 to 3.28415775 using 1294 in-situ measurements. No in-situ sources are removed in the iterative process. The in-situ density measurements are from the ACE Level 0 measurements. The comparison tests of the program show correlations of:
EA_2003.000, Vcorr = 0.9???, Dcorr = 0.9???
e3_2003.000, Vcorr = 0.9???, Dcorr = 0.9???
E3_2003.000, Vcorr = 0.9???, Dcorr = 0.9???
On 05/18/2010 there are still two known problems with this program. The in-situ density reader does not allow anything other than a pre-defined in-situ data file to be read, and it does not allow matching in-situ data across a year boundary. The program also does not allow the correct day to be reconstructed if the time is at the very beginning of the year and the reconstructed volumes actually start in the previous year.
On 5/7/11 I began the process of getting this program to run on the Bender intel compiler.
The new version of this program is called smeiipstd0n_inp_intel.f.
On 5/8/11 the program smeiipstd0n_inp_intel now compiles on Bender. Supposedly all the fixes incorporated into the smeiipstd0n_intel2 program have been incorporated into this version, which uses ACE Level 0 data to help analyze the SMEI tomography. On 5/8/11 I realized that the two new ACE reading routines used in this program did not read from the zshare directory. This is now fixed. On 5/8/11, I'll be danged! The program now seems to be running and fitting a base to the ACE Level 0 data. The base is somewhat negative, -1.8 or so, but there are few sources below zero in MkDHmodeltd_in.f.
On 5/9/11 I was able to get the program working in "forecast" mode.
On 5/18/11 an error was discovered in the main program. The primary error was caused by the error limit in the main program asking "Do you want the density error to limit HR densities?$no',bDdener". bDdener was set correctly, but bVdener was set to .TRUE. and never accessed. When the error files were not used and bWrerr was set to .FALSE., the bVdener in the high resolution writes severely limited the volumetric data. This is now fixed in the main program, and good limit files are still produced even if bWrerr is set to .FALSE.
********************************
On 4/1/2020 I made a main program, smeiipstd0n_inpv20.f, and will attempt to make it more compatible with the more modern version of ipstd_20n_inp_mag3_v20.f. On 4/1/2020 the first modification was to provide a better way to input in-situ velocities and densities. On 4/1/2020 I compiled two new program versions: smeiipstd0nv_inpv20.f and smeiipstd0nvhr_inpv20.f.
On 4/3/2020 I was able to show the smeiipstd0nv_inpv20.f program ran to provide images.
On 4/6/2020 I got the ./smeiipstd0nhr_inpv20_intel executable going with Luke's help to provide the ability to incorporate magnetic fields.
On 4/6/2020 I found a problem in that the base of the brightness is set too low in the reconstructions when the ACE data are fit. I fixed this so far by not fitting the high densities suggested by the least-squares variations from the mean, which are large for this rotation. The mean variations are large mostly because of the very large peak of densities in this Carrington rotation (2003) from the May 30, 2003 CME arrival. The current system, however, finds many brightness values (perhaps one-third) below zero and does not use them, and so this seems a problem. I am not sure of the fix for this. Densities go below zero, but only because the base used is removed following the reconstruction process. I am not sure this is a good thing to do, but it is what seems to have been done in the past.
On 4/8/2020 I began modifying the program to accept the newer IPS write3d_infotd3dM_HR_3.f subroutine of the IPS analysis for velocity and density, and this worked for the nv3h files. The new write3d_infotd3dM_HR_3.f subroutine is named write3d_infotd3dMBER_3.f.
On 4/9/2020 I was able to try this in high resolution with the ./smeiipstd0nhr_inpv20_intel program, and it worked successfully. At the same time, I had the idea that because the writes for these large nv3 and base files take so long, I did not need to write all of them out, but only those near a feature of interest. Thus I modified the input questions and the write3d_infotd3dMBER_3.f subroutine to accept up to three intervals to write out, and this began to work on this day at low resolution. At the same time I began to modify the main program to accept the same write3d_infotd3dMBER_3.f subroutine for the more nearly filled base data. After some mistakes in doing this, this also began to work. The non-field extract routine was also transferred over at the same time, and runs successfully.
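The "up to three intervals" write restriction described above amounts to a simple time-window test before each 3-D write. A sketch of the idea (hypothetical helper, not the Fortran code):

```python
def should_write(t, intervals):
    """Return True if time t falls inside one of up to three (start, end)
    intervals of interest; an empty interval list means write everything."""
    if not intervals:
        return True
    return any(t0 <= t <= t1 for t0, t1 in intervals[:3])
```

Skipping the writes outside the intervals is what cuts the run time, since each 3-D file write is far more expensive than the test.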
At the same time I also partially transferred over the inputs that allow magnetic field inputs with this new program. All now works in both the smeiipstd0nv_inpv20.f and smeiipstd0nhr_inpv20.f main programs. The smeiipstd0nhr_inpv20.f version deconvolves data at 6-hour intervals, with many LOS to spare, but takes a little over 8 hours to complete as of 4/10/2020. Thus, I will now try an even higher resolution reconstruction.
On 4/10/2020 I began running the program smeiipstd0nvhr_inpv20.f at:
NGREST = 8 (three-hour time intervals, interpolated every hour - two intermediate steps)
NGRESS = 8 (rotates one resolution element every 14.3/8 ~1.8 hours)
dRR = 0.02 (at 400 km/s the wind covers 1 AU in about 4 days; there are now 151 points in 3 AU, or 50 in 1 AU, and 50/4 = 12.5 per day; about a 2-hr resolution)
and the program began to run at about 9 am 4/10/2020. At 10 am the program has gone through the 0th iteration. It takes 8.5% memory.
Thus I will now try running the program smeiipstd0nuhr_inpv20.f at:
NGREST = 12 (two-hour time intervals, interpolated every hour - one intermediate step)
NGRESS = 8 (rotates one resolution element every 14.3/8 ~1.8 hours)
dRR = 0.02 (at 400 km/s the wind covers 1 AU in about 4 days; there are now 151 points in 3 AU, or 50 in 1 AU, and 50/4 = 12.5 per day; about a 2-hr resolution)
and the program began to run at about 10:20 am 4/10/2020. At 11:30 am the program has gone through the 0th iteration, and is part way through the first. It is still taking only 8.5% memory for this version, so the memory must be allocated for each time step singly. This was restarted at 1:00 pm because it was giving answers identical to those of the smeiipstd0nvhr_inpv20.f program above, and I cannot see how this can be the same. We will now see if this is the same again. However, it is giving the same answers, and it has also been checked to determine that the smeiipstd0nvhr_inpv20.f version is also different. Hmm, this is very interesting.
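The cadence bookkeeping in the parameter notes above can be checked in a few lines. This is illustrative only (hypothetical helper): at 400 km/s the 1-AU travel time is about 4.3 days, consistent with the rounded "4 days" used in the notes:

```python
AU_KM = 1.496e8  # kilometers per astronomical unit

def grid_resolution(ngrest, drr, v_kms=400.0, r_max_au=3.0):
    """Return (deconvolution time step in hours, radial grid points out to
    r_max_au, and the hours for wind at v_kms to cross one drr-AU shell)."""
    dt_hours = 24.0 / ngrest                 # NGREST steps per day
    n_rad = int(round(r_max_au / drr)) + 1   # e.g. 3 AU / 0.02 + 1 = 151
    days_per_au = AU_KM / v_kms / 86400.0    # ~4.3 days at 400 km/s
    dt_rad_hours = 24.0 * drr * days_per_au  # effective radial time res.
    return dt_hours, n_rad, dt_rad_hours
```

For NGREST = 8 and dRR = 0.02 this reproduces the 3-hour time step, the 151 radial points in 3 AU, and the roughly 2-hour radial resolution quoted above.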
Additionally, this and the uhr version are not showing any LOS below a lower limit, which is also somewhat strange. I guess we wait and see what really happens to the output, which is supposed to have 1.5 times the temporal resolution but no different spatial resolution.
On 4/11/2020 I found the above two analyses did not work, nor did the prior ./smeiipstd0nhr_inpv20_intel run. Something has gone drastically wrong in the analysis. The basic program version smeiipstd0n_inpv20.f, compiled Friday 4/10 with all the new things, is being tried in v20_test4 to see if it works. On 4/11/2020 I found that the smeiipstd0n_inpv20.f program ran correctly and gave the appropriate answers. I then began to modify this version of the program to include a revised subroutine, write3d_infotd3dMBE_HR_3.f. The subroutine and changes in the main program limit the output velocity and density 3D files to within certain time limits so that their outputs do not take so much time. This allows the program to complete in a reasonable time. This worked well, and has been incorporated to write output files in the main program for both the nv3h files and the nv3b files that are supposed to be more completely filled.
On 4/12/2020 I also revised another subroutine, write3d_bbtmBE_HR_3.f, to be used to limit the output of the magnetic files. This never worked in the current SMEI main program I am using, and I believe it never worked at all in the SMEI program. I tried to test this with the current get_bbt_3.f subroutine, and it did not work, so write3d_bbtmBE_HR_3.f was untestable. However, the main program does work satisfactorily up to the point of the subroutine inputs, and so if the read program can be made to work, perhaps the current NSO SOLIS files that seem to exist will eventually work as the GONG files do.
On 4/12/2020 I modified the currently-working smeiipstd0n_inpv20.f program to now run as:
smeiipstd0nhr_inpv20.f (5.6% memory, accessed Mon 13 Apr 2020 04:18:34 PM PDT)
and smeiipstd0nvhr_inpv20.f (8.5% memory, accessed Mon 13 Apr 2020 05:02:51 PM PDT)
as above, and now both seem to be running correctly, unlike before. So, we will now see what happens.
and smeiipstd0nuhr_inpv20.f (10.4% memory, accessed Mon 13 Apr 2020 07:43:36 PM PDT) with
NGREST = 12 (two-hour time intervals, interpolated every hour - one intermediate step)
NGRESS = 8 (rotates one resolution element every 14.3/8 ~1.8 hours)
dRR = 0.01 (never tried before, or never ran)
as above, and it now seems to be running. But the analysis didn't work. The extract routines do not use the settings below, nor do the 3-D analyzers.
if(NGREST.eq.3) NinterDD = 3, a file is written every 2 hours
if(NGREST.eq.4) NinterDD = 2, a file is written every 2 hours (smeiipstd0nhr_inpv20.f)
if(NGREST.eq.6) NinterDD = 3, a file is written every 1 hour
if(NGREST.eq.8) NinterDD = 2, a file is written every 1 hour (smeiipstd0nVhr_inpv20.f)
if(NGREST.eq.12) NinterDD = 1, a file is written every 1 hour (smeiipstd0nUhr_inpv20.f)
On 4/19/2020 I found a problem with the higher-resolution analyses: an if(.not. ... .or. .not. ...) statement right after the MkTimes.f subroutine in the main program was incorrect, limiting the NTG data to the same number of time steps as the NTV data. This caused the immediate problem with the analysis in that the outputs of velocity and density did not have the correct number of outputs. This now seems fixed, in that the extract, nv3h, and nv3b data are now output for the IDL. However, several other problems have now resulted. In the runs, more than half the LOS are above an upper bound and are removed, and the tomography also seems to have reconstructed only the first half of the data in time. I suspect that there is some other analysis limit not set correctly at the beginning of the main program.
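The NGREST/NinterDD table above is consistent with a single rule: the 24/NGREST-hour deconvolution step is subdivided by NinterDD interpolated files, giving a write cadence of (24/NGREST)/(NinterDD+1) hours. A sketch that reproduces the table (hypothetical helper, not the Fortran source):

```python
def write_cadence_hours(ngrest, ninterdd):
    """File-write cadence implied by NGREST deconvolution steps per day
    with ninterdd interpolated files between each pair of steps."""
    return (24.0 / ngrest) / (ninterdd + 1)

# NGREST -> NinterDD pairings taken from the notes above
NINTERDD = {3: 3, 4: 2, 6: 3, 8: 2, 12: 1}
```

Evaluating write_cadence_hours over NINTERDD gives 2, 2, 1, 1, and 1 hours, matching all five rows of the table.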
On 4/20/2020 I found one major problem from the above. Simply, the mkshift routine was only being used before or following velocity, and this meant that dfac was not provided at the resolution of the density (this was not needed for the version where both density and velocity had equal temporal resolution; at one time I had only considered g-level analysis, not brightness). Thus, the program now seems to provide an analysis that completes, albeit still with considerable loss of lower-brightness LOS. Now, for CR2003:
-smeiipstd0n_inp-I-Info, good B observations 3340356
-smeiipstd0n_inp-I-Info, good V observations 253
-smeiipstd0n_inp-I-Info, good in-situ D observations 1294
-smeiipstd0n_inp-I-Info, good in-situ V observations 1294
The first, smeiipstd0nhr_inpv20.f (6.4% memory, from Mon 20 Apr 2020 05:49:35 PM PDT - Tue 21 Apr 2020 07:38:20 AM PDT: time = 14:49:45): 8.154 times more redundant; removed 10 V sources and 81906 B sources; at iteration 18, 1370 LOS above, 1233916 below, 1578974 remain; 860 V L.O.S. removed; 527490 SMEI B removed; 0 in-situ densities removed; EA extract OK; e3 extract OK; completed. The EA (D 0.970, V 0.801) and e3 (D 0.952, V 0.791) fits at 0.20 seem excellent in density, for the whole EA (D 0.995) and e3 (D 0.995), and also at 0.21 for the May 30 CME from 5/28 to 6/2. The nv3h* and nv3b* files had many bad writes in density, e.g.
Bad write3d Density 588 85 193 -1.7014117E+38 18449.79 -5.999660 1.000000
but these seem to have come out OK.
The second, smeiipstd0vhr_inpv20.f (14.4% memory, from Mon 20 Apr 2020 10:17:38 PM PDT - Wed 22 Apr 2020 08:24:31 AM PDT: time = 33:07:53): 2.306 times more redundant; removed 11 V sources and 84234 B sources; at iteration 18, 1370 LOS above, 1233916 below, 1578974 remain; 861 V L.O.S.
removed; 191 in-situ densities removed; 524071 SMEI B removed; 1 in-situ density removed; EA extract OK; e3 extract OK; completed, and gave one-hour cadences. These have higher resolution than smeiipstd0nhr_inpv20.f above, but do not seem to fit the ACE data as well. The EA (D 0.963, V 0.835) and e3 (D 0.961, V 0.855) fits at 0.20 for the whole are excellent as above; the EA (D 0.956) and e3 (D 0.954) fits at 0.10 are somewhat decreased in goodness for the whole; the values are EA (D 0.963) and e3 (D 0.963) at 0.19, and EA (D 0.957) and e3 (D 0.957) at 0.09, for the May 30 CME from 5/28 to 6/2. This is mostly due to a large spike on the descending flank of the CME peak, following the main May 30 peak, that does not appear in the ACE data at low resolution but does in both ACE L0 and Wind data (somewhat more prominently) at high resolution. The IDL does not work for the data at delt=0.0, however. The nv3h* and nv3b* files had many bad writes in density, and these did not get interpreted correctly by the IDL. The time taken at the last was mostly because of the many hours spent writing out the 3-D data for each hour (2.7 Gbyte each), which when loaded into Pluma bombs the Pluma program.
The header for the nv3h file that crashed is:
; /home/bjackson/soft/for/smeiipstd0n_in-situ/v20vhr_test/nv3h2003.6011_00001 created on 2020/04/22 02:23:55
; Bad value flag: -9999.9990
; T3D_header, v1.04 FOR
; File name prefix: nv3h
; Universal time: 2003:149:12:00:00.000
; Carrington offset: 1998
; Carrington time: 5.6011
; Carrington limits of array (start/end): 4.0278 7.0278
; Carrington limits of region of interest (start/end): 5.1011 6.1011
; Carrington start time: 120.2500
; Carrington time resolution: 4.16667E-2
; Carrington forecast time: 0.0000
; Latitude range (degrees): -90 90
; Radial reference distance (AU): 6.97861E-2
; Radial resolution (AU): 5E-3
; Power index of density dependence of density fluctuations (V-data,G-data): 0.35 0.35
; Power index of radial dependence of density fluctuations (V-data,G-data): 0.3 0.3
; Density at 1 AU (cm^-3): 3.641
; Spatial filters for smoothing velocity and density (degrees): 14 1.75
; Spatial filters for filling velocity and density (degrees): 1 1
; Temporal filters for velocity and density (days): 0.75 9.375E-2
; Clip longitude (degrees): 90
; Dimensions (nLng,nLat,nRad,nTim): 1729 289 289 440
; Iteration: 19 / 18
; Time index: 440/ 440
; # Velocity lines of sight: -2147483648
; # G-level lines of sight: 0
; # Line-of-sight segments for velocity: 0
; # Line-of-sight segments for g-level: 0
; Line-of-sight resolution for velocity: 0
; Line-of-sight resolution for g-level: 0
; # segments/bin for velocity: 0
; # segments/bin for density: 0
; Linear scaling constants: 0 0 0 0
; Power for radial normalization: 0 2
; Rotation counter: 1
; Velocity (km s^-1)/normalized density (cm^-3)
(and now the data begins)
The third, smeiipstd0uhr_inpv20.f (28.7% memory, from Tue 21 Apr 2020 09:04:29 AM PDT - Tue 21 Apr 2020 ~11:00 AM PDT: 2 hrs), 1.537 times more redundant, crashed with a segmentation fault after Mkshiftdn0n for density.
The third again, smeiipstd0uhr_inpv20.f (28.7% memory, from Tue 21 Apr 2020 06:12:26 PM PDT - Tue 21 Apr 2020 ~08:10 PM PDT), 1.537 times more redundant, crashed with a segmentation fault after Mkshiftdn0n for density, at about 24.3% memory.
The third again, smeiipstd0uhr_inpv20.f (28.7% memory, from Wed 22 Apr 2020 07:25:46 AM PDT - Wed 22 Apr 2020 09:15:46 AM PDT), 1.537 times more redundant: a new commented version of get4dval.f has been developed, called get4dvalm.f. This version has a slightly larger F4D variable defined in case this will work. NN is 1 in this instance, and it is my suspicion that the integer variables here do not match the approximate values in the original setup of the L.O.S. measured values.
On 4/25/2020 I was able to solve the problem with the get4dval.f subroutine that called flint. This held up the analysis of smeiipstd0uhr_inpv20.f because of a segmentation fault at about the 105947303rd access of flint, about one-fifth of the way through the number of accesses of the flint function on Leela. The fix was a new subroutine, get4dval3.f. This subroutine accesses each LOS separately, rather than making consecutive accumulative accesses of each LOS one after another, and this divides up the ultra-long sequence of numbers by LOS without a segmentation fault. I don't know why the original subroutine would not work, but this does. The new routine was needed both for each brightness LOS and for the in-situ values.
On 4/27/2020 another problem was discovered with the smeiipstd0uhr_inpv20.f program. The program now claims almost all LOS have too large a brightness, and throws almost all brightness LOS away. For some reason the extra density included is too low. This has become increasingly worse with each higher-resolution system, indicating that some automatic adjustment of the extra density is being normalized incorrectly. Thus, I need to fix this somehow.
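The get4dval3.f change, as described above, replaces one accumulated pass over every segment of every LOS with a short loop per LOS. The restructuring can be sketched like this (hypothetical names; flint stands in for the interpolation function):

```python
def interpolate_all(los_segments, flint):
    """Evaluate the interpolation kernel one line of sight at a time,
    rather than in one ultra-long accumulated sequence of calls - the
    per-LOS split described for get4dval3.f."""
    return [[flint(seg) for seg in segments] for segments in los_segments]
```

The numerical result is identical either way; only the access pattern changes, which is why the per-LOS version could avoid the failure seen a fifth of the way through the monolithic pass.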
On 4/28/2020 I attempted to go back to basics with the smeiipstd0n_inpv20.f program version to find why so much data was removed from the regular analysis for being out of bounds. I got a revision of the program to run with many inclusions of data written out, and the program finally ran, but in this the EA and e3 files did not fit the ACE densities as well as before. Thus, I will now begin to remove some of the original program edits, and try just to include the written-out data.
On 4/29/2020 at ~9:30 am all of the smeiipstd0uhr_inpv20.f programs finished with a segmentation fault just after writing out the EA files:
v20uhr_test: Sun 26 Apr 2020 11:19:08 PM PDT - Tue 28 Apr 2020 06:30:13 PM PDT: 43h 11m
v20uhr_wind: Mon 27 Apr 2020 08:36:52 AM PDT - Wed 29 Apr 2020 04:12:52 AM PDT: 43h 36m
v20uhr_noi: Mon 27 Apr 2020 08:43:57 AM PDT - Wed 29 Apr 2020 04:37:50 AM PDT: 43h 55m
The EA files were written at one-hour cadences over the appropriate times, but the densities were in the thousands, where present, and the velocities were only somewhat accurate at the very beginning of the interval for the v20uhr_test version above. Ugg!
On 4/29/2020 I returned to an earlier version of the smeiipstd0n_inpv20.f program, and got the basic analysis running again in a version of v20_test7. Now I will remove the BASEOBS value that I do not see should be in the program, to see if this makes the program work better. Removal of BASEOBS in its two locations did not change the iterations at all. It also did not seem to change the time series at all. When I substituted the get4dval3.f subroutine for get4dval.f, it was supposed to work, but gave absolutely wrong answers, and so this explains a lot about why the earlier smeiipstd0uhr_inpv20.f programs did not work at all.
So, I figured out how the darn get4dval.f function worked, or at least I guessed how it was supposed to, and I now have a working subroutine get4dval3.f (it gives the same fit values as the get4dval.f subroutine, at least for the smeiipstd0n_inpv20.f program).
On 4/29/2020 I again began:
v20uhr_test2: Wed 29 Apr 2020 09:46:32 PM PDT - Nope, this didn't work. The program stopped again in the get4dval3.f subroutine, as far as I can determine.
On 4/29/2020 I began to get the program to run with the better analysis in the ipstd program that fills the nv3h and nv3b files directly, without saying it is going through the error routines.
On 5/3/2020 at the end of the day I was finally able to determine mostly what I was doing to get the error files, and the way ipstd_20n_inp_mag3_v20.f attempts to use these files to restrict the nv3h*, nv3b*, and magnetic* writes. A couple of things: The nv3d and nv3f writes in write3D_infotd3DM_3 require the base at the inner boundary to be high, and use the power as a falloff. The nv3h, nv3o, and nv3b writes in write3D_infotd3DM_HR_3 require inputs that are low, and assume the falloff is already present. The copyvtovdn subroutine copies V to the D cadence if you put nTV,nTG. The copydtodvn subroutine copies D to the V cadence if you put nTG,nTV. (By the way, both subroutines are placed together in the same copyvtovdn.f file.) I still haven't tried all the nv3d, nv3f, nv3h, nv3o, and nv3b options alone, but together they work, and they should work singly. I still haven't determined the BASEOBS problem, but it may have something to do with some things that have been commented out. There are unresolved problems with the ipstd_20n_inp_mag3_v20 program as far as the limits using the error files, but I have documented some of these and need to fix them.
On 6/8/2023 I began to test the smeiipstd0nvhr_inpv20_intel routine to provide high resolution analyses for the Solar Wind 16 conference.
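The copyvtovdn/copydtodvn behavior described above, copying a series from one time cadence (nTV or nTG) to the other, can be sketched as linear resampling (a minimal stand-in with a hypothetical name, not the Fortran code):

```python
def copy_to_cadence(series, n_out):
    """Resample a time series onto n_out points by linear interpolation,
    in the spirit of copying V to the D cadence (or D to V)."""
    n_in = len(series)
    if n_in == 1 or n_out == 1:
        return [series[0]] * n_out
    out = []
    for i in range(n_out):
        x = i * (n_in - 1) / (n_out - 1)   # fractional index in the input
        j = min(int(x), n_in - 2)          # left neighbor, clamped
        f = x - j                          # interpolation weight
        out.append(series[j] * (1 - f) + series[j + 1] * f)
    return out
```

Passing the argument counts in the two orders (nTV,nTG versus nTG,nTV) selects which cadence is the source and which is the target, which matches how the paired subroutines are described.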
Before this the programming was only tried on the May 18 2003 CME. Two other events were tried, from November 18 and October 28. These worked with great success to show mesoscale structure, I believe. At least they gave results that seemed sensible, including some sort of response in the ecliptic plane from the October 28 Halloween storm event that looks like a shock that reached 1 AU in some places at about 2 UT on October 29. There are some anomalies that seem strange for this event, however.
On 4/30/2024, on learning that the CCMC would no longer support the old PG Fortran that used multiple entry points, and when Ben found that the one subroutine in smeiipstd0nvhr_inpv20.f that did this (as far as I know) was readvipsn8.f, misnamed as readvips8.f, I changed the name in the makefile and in the original version and began to attempt to rewrite the old version of the readvipsn8.f subroutine so that it has no multiple entry points. This subroutine is called readvipsn8_n.f and will initially be used with an if statement that only provides a read for the old nagoya data. Eventually, since we can have no multiple entry points, the newer versions of the same reads that read the general format will also need to be changed to not contain multiple entry points. The newest version of the IPS read program is now run in the main program and compiled with a make file as smeiipstd0nvhr_inpv24.f.
On 5/3/2024 Ben placed a way to access a new version of pgf77 so the tomography could be compiled with it. Into .bashrc was placed:
export PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/24.3/compilers/bin/:$PATH
and this allowed compilation of the program. If a terminal window is already open, then rather than opening a new window type:
source ~/.bashrc
and this runs the bashrc script so that the compilation can be done in that window.
The new pgf77 compiler fails to compile the subroutine readvipsn8.f, however, and this means that unless a fix is made we cannot compile our program anywhere but at UCSD using our ifort compiler. A newer ifort compiler version does not work either, but gives a different error.
On 5/5/2024 I followed Ben and Paul Hick's suggestion to try putting fewer arguments into the readvips subroutine. I did this by putting all the dimensioned SAV arguments into a SAVE area in the main program, and not including these in a new read subroutine, readvipsm8.f. The SAV arguments are then dimensioned "1" in the parameter list, as in the main program, in the readvipsm8.f subroutine, and placed into a SAVE location in the subroutine. This then gives identical inputs in the PG version of the program as in the ifort-compiled program, and both can be compiled by either the ifort compiler or the PG compiler. The program was then tested using both compilers, and gave the same iterated values and the same volumetric data, as tested using the nv3h files.
On 5/6/2024, however, there was still an error, in that the time series was not the same for the CR 2003 runs that are resident in the leela sub-directories in test_PG_ifort.
On 5/7/2024 I traced this error to a mistake in the extractdvdm.f and extractdvdm_3.f subroutines. In these subroutines there was a variable xxinc that was used and placed into an added line, but was mislabeled xxCinc. Although xxinc was set to zero, the xxCinc value was not initialized. This gave xxCinc a very small number in the PG-compiled program that produced a NaN, and this stopped the time series from being produced, halfway through the subroutine before its completion. With this error fixed, the PG program now gives time series analyses that so far have been shown to be as valid as those of the ifort compiler.
Tests are now underway to provide a more complete test of the PG-compiled program to produce nv3b files and magnetic files, as well as tests to completion of the very high resolution analysis using the PG-compiled vhr program that, as with the ifort program, provides the tomographic result with a one-hour cadence time series.