ipsdtestR nagoya=nagoya,/home/bjackson/dat/nagoya/yearly

Revisions:

On the week of 03/08/01 I began modifying ipsdtestR using mk_ipsdtestR.

Stuff before: On about 11/15/00 I discovered an error in the way scratch arrays were zeroed. I modified the IPSDTDV2020NN.f program to use the above programs with scratch arrays placed in the calling sequence. When I did this I noticed that the scratch arrays originally used were not dimensioned properly for the 2020 program. The scratch arrays zeroed at the beginning of the program were not fully zeroed, and this was an error. When this was done properly, the 2020 program no longer converged with few holes - there were a lot of holes in both velocity and density, and the results that I had come to like and agree with were no longer correct. I presume that the increased number of line-of-sight crossings from non-zeroed arrays was partially responsible for the previous OK results (which always seemed to be consistently the same for any given Carrington map). I consequently got the idea to filter the weights and the number of line-of-sight crossings in both time and space so that these are more consistent for only a few lines of sight, in a way similar to the answers. Thus the weights and the numbers of line crossings are now smoothed spatially and temporally with the same filters as the answers. This seems to work wonderfully, and allows the line-of-sight crossing threshold to be set lower than before - to as low as 1.5. At 1.5 most of the Carrington map is filled and the program proceeds to convergence (on Carrington map 1965) pretty well. As of 11/16/00, the comparisons with in-situ work almost as well as before, but time will tell. In an additional change on 11/15/00, I modified mkvmaptdN.for and MKDMAPTDN.for to increase the effective number of line-of-sight crossings by sqrt(1/cos(latitude)). This allows the acceptance of a lower number of line-of-sight crossings near the poles, since the spatial filtering is greater there.
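The latitude scaling just described can be sketched in a few lines. This is a minimal Python illustration of the idea, not the Fortran in mkvmaptdN.for or MKDMAPTDN.for; the function and argument names are hypothetical:

```python
import numpy as np

def effective_crossings(n_cross, lat_deg):
    """Scale a raw line-of-sight crossing count by sqrt(1/cos(latitude)),
    so fewer actual crossings are needed to pass the acceptance threshold
    near the poles, where the spatial filtering is broader."""
    lat = np.radians(lat_deg)
    return n_cross * np.sqrt(1.0 / np.cos(lat))

# At the equator the count is unchanged; at 60 deg latitude it is
# boosted by sqrt(1/cos 60) = sqrt(2).
print(effective_crossings(1.5, 0.0))   # 1.5
print(effective_crossings(1.5, 60.0))  # ~2.12
```

The boost grows without bound toward the poles, which is why the acceptance threshold (as low as 1.5 crossings) can be applied uniformly over the map.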
This also seems to work. On 11/15/00 I also modified fillmaptN.for and copyvtovdN.for to accept scratch arrays through the calling sequence. IPSDTDV2020NN.f also needs to be modified to accept these subroutines with their new calling sequences. The convergence with the above modifications (11/16/00) seemed pretty ragged until sources were thrown out, after which the convergence proceeded in a very stable fashion. The peak in density at 7/15 was absent in the density data, and this may be because of thrown sources. On 11/16/00 I then also smoothed FIXM spatially and temporally in both mkvmaptdN.for and MKDMAPTDN.for. This also converged in a ragged fashion even after the thrown sources, and in the end did not write the 3D arrays I asked for - maybe memory was clobbered. The program did converge, however. The in-situ time series was not at all like the model!!! On 11/16/00 I then changed back to the earlier convergence scheme where the FIXM is not smoothed. The 3D arrays still are not being written out. On 11/16/00 I noticed that the mkvmodeltd.for routine was not as in the vax [.ipsd2020.normn] subdirectory, but was an older version. I re-wrote the newer version (which does not need a scratch file) and replaced mkvmodeltd.for with the newer version. I also checked to see that the MKGMODELTDN.for subroutine was unchanged from the vax, and I revised it so that it passes two scratch arrays through its calling sequence. The newer version iterates identically to the old version. There seems to be no change in the write3b status throughout the iterations, indicating that nothing gets clobbered during the iterations. On 11/21/00 I fixed the fillwholet.for subroutine so that a scratch array is now passed through its calling sequence, which fixes the error noticed on 11/16/00. Things seem pretty much the same when I do this, and now the t3d files are output.
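The scratch-array hazard behind these changes can be illustrated generically: a work array that is reused without being fully zeroed carries stale counts into the next call. A toy Python sketch (the accumulator and its arguments are invented for illustration, not taken from the Fortran):

```python
def accumulate_crossings(sources, nbins, scratch=None):
    """Toy line-of-sight accumulator. If `scratch` arrives non-zeroed,
    stale counts from a previous call leak into the new result."""
    if scratch is None:
        scratch = [0] * nbins  # freshly zeroed working storage
    for s in sources:
        scratch[s % nbins] += 1
    return scratch

stale = accumulate_crossings([0, 1, 2], 4)     # first call: [1, 1, 1, 0]
wrong = accumulate_crossings([0], 4, stale)    # reused without zeroing
right = accumulate_crossings([0], 4, [0] * 4)  # properly zeroed scratch
print(wrong)   # [2, 1, 1, 0] - inflated crossing counts
print(right)   # [1, 0, 0, 0]
```

Inflated crossing counts of exactly this kind would mimic extra line-of-sight coverage, which is consistent with the non-zeroed arrays having produced the deceptively hole-free maps noted above.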
On 11/21/00 I stopped using distributed weights, since the program NOT using distributed weights seems to converge as well as the version that does use them. The EA answers are somewhat different, but not much. On 12/5/00 I found an error in the write3dinfotd1N.for subroutine in forecast mode. The subroutine gave two 3d files that were incorrectly labeled (and interpolated) because AnewI was wrong. I believe AnewI was wrong because the input N values were wrong in forecast mode. The problem was in the use of XCintF: there was no N+1 copy of the XCintGG array into the XCintF one in the main program. In forecast mode this caused the bomb. However, for whatever reason there once was a forecast mode for the write3dinfotd1N.for routine, it does not exist any more. I have therefore eliminated the forecast mode for write3dinfotd1N.for in both the main program and in the subroutine, so that now all that is needed is a single call to write3dinfotd1N.for. On 12/7/00 I found an error in the MKTIMESVD.for routine in that the NmidHR value was subtracted from the time to start the beginning day. In the main program, this value was labeled number of hours "before" midnight and was -10 for Nagoya. The main program now reads number of hours "from" midnight and is still -10 for Nagoya. UCSD is +8 or +9, as you know, depending on daylight savings time. The MKTIMESVD.for subroutine now adds this value to begin the day at midnight. This has the effect of changing all the extensions of the t3d files, since the temporal intervals now begin at a different time - midnight at Nagoya. If the t3d files are interpolated by 1 in write3dinfo1N.for, this has the effect of dividing the day into times before midday in Nagoya and after midday in Nagoya. If the files are interpolated by 2 in write3dinfo1N.for, then the t3d files are divided (approximately) into midnight to 8am, 8am to 4pm, and 4pm to midnight.
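The way the interpolated t3d files divide the local day can be sketched as follows, assuming interpolation by N yields N+1 equal intervals per day starting at local midnight (a Python illustration; the function name is invented):

```python
def interval_boundaries_hr(interp):
    """Boundaries, in local hours from midnight, of the sub-day t3d
    intervals when the files are interpolated by `interp`
    (interp + 1 equal intervals per day)."""
    n = interp + 1
    step = 24.0 / n
    return [i * step for i in range(n + 1)]

print(interval_boundaries_hr(1))  # [0.0, 12.0, 24.0]: before/after midday
print(interval_boundaries_hr(2))  # [0.0, 8.0, 16.0, 24.0]: the thirds above
```

With the old subtraction of NmidHR these boundaries would have been shifted away from local midnight, which is what changed all the t3d file extensions.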
The extension values have been checked by PL HE Pandora on the vax. On 12/7/00 I found that the forecast 51874 run, which terminates data at this time, or 12 UT on November 25 (355 sources found), gives the last matrix at 1970.064 (centered at 1970.0825, or 2 UT November 26). The forecast run at 51874.5 (0 UT November 26) (371 sources found) gives the last matrix at 1970.064 as well. Since this does not give even a one-day 3d matrix advance forecast, I have now changed the values of NTV and NTG by one more so that the 3d matrix is written to a file for at least one day beyond the current time. On 12/7/00 I found that in the forecast runs there were as many sources used as there were source values within the NLG and NLV increments set by XCintG and XcintV. I fixed this in the main program so that now all the data used in forecast mode comes from times that are less than the Carrington rotation of the Earth at the time given as input as the forecast time. The current mk_ipsd2020NN uses the IPSDTDV2020NN.f main program. On 01/30/01 I copied all the programs of the mk_ipsd2020NN compilation over to the for/IPSd2020NN subdirectory so that this program and its subroutines are now complete and separate from the other fortran programs. On 01/30/01 I also began to alter the FIXMODELTDN.for and mkvmodeltd.for subroutines so that they better reproduce in-situ velocities. I have done this by copying all the files to the for/IPSd2020 subdirectory so that this program and its subroutines are now complete and separate from the other fortran programs. I renamed the IPSDTDV2020NN.f program to IPSD2020.for. When I ran the program for 1965 in this subdirectory, the results for 16., 16., 0.25, 0.25 and 0.40 for the EA_FILE idl run_compareaceLp run were 0.771 and 0.066, and this is slightly different from before, which was 0.688 and 0.058. I don't know why there is this slight difference. On 01/30/01 I began a lot of changes in mkvmodeltd.for in order to get better velocities into the matrix.
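The forecast-mode restriction described above amounts to a time cut on the source list: nothing observed at or after the forecast time may enter the fit. A minimal Python sketch (the function name and list representation are illustrative, not from the Fortran):

```python
def restrict_to_forecast(source_times, forecast_carrington):
    """Keep only observations earlier than the Carrington rotation value
    of the Earth at the forecast time, so no future data leaks into a
    forecast run."""
    return [t for t in source_times if t < forecast_carrington]

# Hypothetical source times in Carrington rotation units:
times = [1970.01, 1970.05, 1970.07, 1970.10]
print(restrict_to_forecast(times, 1970.064))  # drops the last two entries
```

Before the fix, sources were accepted merely for falling within the NLG/NLV increments, which let observations from after the nominal forecast time contribute.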
I think the original (or perhaps the 01/31/01B model) is the correct one for weighting, but I tried a lot of things to see if there could be an improvement in the in-situ comparison. There was not a whole lot of difference among the variants tried, below:

C       VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! Original
C       VWT = VWT + VWTij(J,I)
C       VWTi = VWTi + VWTij(J,I)

C       VWTij(J,I) = VWTij(J,I)*VSN                ! Added B. Jackson 01/30/01
C       VWT = VWT + VWTij(J,I)
C       VPERP = VPERP + VWTij(J,I)*VELO
C       VWTi = VWTi + VWTij(J,I)

C       VWTij(J,I) = VWTij(J,I)*VSN                ! Old run long ago.

C       VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! 01/31/01A
C       VWT = VWT + VWTij(J,I)*VSN
C       VWTi = VWTi + VWTij(J,I)*VSN
C       VWTij(J,I) = VWTij(J,I)*VSN

C       VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! 01/31/01B Seemed ~best so far
C       VWT = VWT + VWTij(J,I)
C       VWTi = VWTi + VWTij(J,I)*VSN*VELO
C       VWTij(J,I) = VWTij(J,I)*VSN*VELO

C       VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! 02/01/01A
C       VWT = VWT + VWTij(J,I)
C       VWTi = VWTi + VWTij(J,I)/VSN
C       VWTij(J,I) = VWTij(J,I)/VSN

C       VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! 02/01/01B (Original)
C       VWT = VWT + VWTij(J,I)
C       VWTi = VWTi + VWTij(J,I)
C       VWTij(J,I) = VWTij(J,I)

C       VPERP = VPERP + SQRT(VWTij(J,I))*VSN*VELO  ! 02/01/01C
C       VWT = VWT + SQRT(VWTij(J,I))
C       VWTi = VWTi + SQRT(VWTij(J,I))*VSN*VELO
C       VWTij(J,I) = SQRT(VWTij(J,I))*VSN*VELO

        VPERP = VPERP + VWTij(J,I)*VSN*VELO        ! 01/31/01B Seemed ~best so far
        VWT = VWT + VWTij(J,I)
        VWTi = VWTi + VWTij(J,I)*VSN*VELO
        VWTij(J,I) = VWTij(J,I)*VSN*VELO

C       VPERP = VPERP + (VWTij(J,I)**2)*VSN*VELO   ! 02/02/01A
C       VWT = VWT + (VWTij(J,I)**2)
C       VWTi = VWTi + (VWTij(J,I)**2)*VSN*VELO
C       VWTij(J,I) = (VWTij(J,I)**2)*VSN*VELO

        VW = VWTij(J,I)*VSN*VELO                   ! 01/31/01B rewritten
        VPERP = VPERP + VW
        VWT = VWT + VWTij(J,I)
        VWTi = VWTi + VW
        VWTij(J,I) = VW

Thus, I will settle on the rewritten version above. All other versions of the program should incorporate this weighting, which essentially places all the line-of-sight variations into the weight.
The nominal 16., 16., .65, .65, .25, .25, .4 run of 1965 gives 0.647546 and 0.229140 for the density and velocity correlations for the restricted data set and ACE in-situ measurements around the time of the July 14 CME peak. Other combinations of parameters give higher correlations, but none give the same density values in-situ/model with 16. and .65 as do these. The run of velocity deconvolution alone (2/12/01) did not allow the parameters to be set. This is now fixed in the main program (2/12/01). The version of the program that deconvolves velocity alone (both with constant density and with mv^2 = constant) bombed in gridsphere with bad VMAP values before any iterations were reached. I have now fixed this and also fixed the problem when G-level alone is selected. The problem was in the setup of each initial velocity or density array. On 2/14/01 the velocity mv^2 works. On 2/14/01 the density mv^2 does NOT work to give velocity. Thus, I had better fix the Mk_D2V subroutine. On 2/15/01 I re-did a lot of the program to accommodate the modes that use a constant velocity and density and the mv^2 assumptions, plus a write-out of the DMAPHOLE and VMAPHOLE data. These work now and have been checked using the routine to converge on both density and velocity, on velocity using constant density, and on velocity using mv^2. The latest runs give the nominal using 16., 16., .65, .65, .25, .25, .4 for 1965 of 0.666897 and 0.417346. I think the "better" correlations may happen because fillwholeT is no longer double, but I am not sure of this. The correlations look considerably different now, too. Thus, to check, I redefined using fillwholeT as before, and when I did this the nominal for 1965 looked the same as it has in the past in the iterative phase, at least up to iteration 14 or so, where the two began to diverge ever so slightly in the percentages only. The correlations were 0.647546 and 0.229140, which were identical to before.
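The mv^2 = constant bookkeeping used in these modes can be sketched as follows: the constant is fixed from the nominal equatorial values of the modeled quantity, and the other quantity follows from it. This is a hedged Python illustration of the relation only; the actual work is done in the Fortran subroutines, and all names and sample values below are invented:

```python
def density_from_velocity(v, v_eq_mean, n_eq_nominal):
    """With n*v^2 = const fixed at the solar equator
    (const = n_eq_nominal * v_eq_mean**2), derive density from speed."""
    const = n_eq_nominal * v_eq_mean ** 2
    return const / v ** 2

def velocity_from_density(n, n_eq_mean, v_eq_nominal):
    """Symmetric case: const fixed from the equatorial mean density."""
    const = n_eq_mean * v_eq_nominal ** 2
    return (const / n) ** 0.5

# e.g. with an assumed equatorial mean speed of 400 km/s and a nominal
# equatorial density of 5 /cm^3, a fast 800 km/s stream maps to low density:
print(density_from_velocity(800.0, 400.0, 5.0))  # 1.25
```

The two functions invert each other, so a density derived from velocity maps back to the same velocity under the same equatorial constants.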
I thus returned the fillwholeT routines so that the times are no longer double-smoothed, and I began a search for more "nominal" parameters. I also got a version of MK_DTV.FOR from the CASS01 fortran files; I renamed it MK_DTVN.for and compiled this version in the IPSd2020 subdirectory. The new version now works for mv^2 = constant for density iterations and gives approximately correct velocities. The new version also works for mv^2 = constant for velocity iterations and gives approximately correct densities. Oh, yes, another innovation is that the mv^2 = constant is now determined for each iteration, with no convergence problems as far as I can tell. I now also determine the equatorial density and velocity (subroutines Deneq3.for and Veleq3.for) so that the mv^2 = constant is now correctly applied to the other nominal value. In other words, the average speed at the solar equator is used to determine the mv^2 constant to produce density when speed is modeled. Likewise, the average density at the solar equator is used to determine the mv^2 constant when density is modeled. The nominal 16., 16., .65, .65, .25, .25, .4 for 1965 gives 0.666897 and 0.417346. Velocity convergence: constant density and the nominal for 1965 above gives -0.123838 and 0.0811669. (The density above looks in the correct range, but it just is not constant. This is because the Mkshiftd routine allows non-constant interactions to build up with height as long as velocities are non-constant.) Also, mv^2 density and the nominal for 1965 gives -0.0242271 and 0.115706. End eq. vel = 334.436707. Density convergence: constant velocity and the nominal for 1965 above gives 0.347806 and NaN. Also, mv^2 velocity and the nominal for 1965 above gives 0.347806 and 0.318303. End eq. den = ? (Something goes wrong when this last is done, in that the t3d file doesn't seem to write out OK.
Also, it isn't clear why the correlation with velocity = constant, versus velocity with density given by mv^2, does not give a different density correlation. Seems to me that it should.) On 2/16/01 I think I found the error, at least in the last problem. The DMAP is not being updated each time the mv^2 is done; DMAP is consistently being held constant. VMAP has the same problem.

Latest Revisions for the ipsdtestR program

On 03/04/01 I ran the latest test program with spatial resolutions three times normal. The test rotation was 1884. The run lasted about 1 day. Other things were running. On 03/08/01 the run ended where times and spatial resolutions were set to 2 times normal. The run lasted a day and a half for rotation 1884. On 03/07/01 I also ran a few tests with the program in 10-degree mode, and I updated the parameter list to make the changed resolution automatic with only a few changes in the parameter input. I also fixed the program so that it can (I hope) run with two different temporal resolutions - one for density and the other for velocity. In addition, these new spatial resolutions now allow different line-of-sight resolutions commensurate with the new spatial resolutions for either velocity or G-level. I will try this version of ipsdtestR today, 03/08/01. Later - the program runs, but the density fixmodel routine returns a "nan", as do all the other outputs of that subroutine. Somehow, the problem of a "nan" went away in the later compiled versions of the program, and so now on 03/09/01 I have a "working - so far" program using these new resolutions. The 10 deg.
version in both V and g-level currently takes:

CR 1884: g-level sources: 3851, V sources: 652 + 51 = 703
  memory:
    1x resolution in S & T V:  57,388
    1x resolution in S & T g:
    1x resolution in T V:      360,488
    2x resolution in S V:
    2x resolution in S & T g:

CR 1858: g-level sources: 9265, V sources: 652
  memory:
    1x resolution in S & T V:  69,756
    1x resolution in S & T g:
    1x resolution in T V:      388,396
    2x resolution in S V:
    2x resolution in S & T g:

These tests imply that 3 degrees resolution with 6 Megs of sources (200 x 30,000) held in memory will take:
  1.2 Gigs of RAM for the program
  55.6 Gigs of RAM for the sources
  ~150 hr for 0.01 Megs of sources with an 850 MHz machine

At ONE degree resolution with 18 Megs of sources (600 x 30,000) held in memory:
  11.0 Gigs of RAM for the program
  678.0 Gigs of RAM for the sources
  ~4000 hr for 0.01 Megs of sources with an 850 MHz machine

On 3/13/01 I began timing experiments. More experiments will continue when the new 1.2 GHz machine with 1.5 Gigs of RAM begins working. On 3/14/01 I wrote a subroutine writeM_Scomp.for that prints out files used to check sources if the user asks. For Cambridge data, where no source names are available, this program uses the sky-location declination and the sky distance from the sun to give a unique source identifier for each 5-degree sky location. On 3/14/01 I also wrote a subroutine iwriteproxymapN.for that is the complement of ireadproxymapN.for. This subroutine writes out maps of the final source-surface data to be used in the program as input on the first run of the program in order to help iteration convergence. I also modified ireadproxymapN.for so that now fillmap.for is no longer needed. I also removed a good many other unneeded lines of code. On 3/15/01 I modified the main program so that it no longer needs to (but can, if wanted) use two shifts on each iteration.
The modification uses a shift only after the velocity is determined (a density-based shift) to set up for the density iteration. To do this, the velocity times and settings need to be the same as the density ones, and they are forced to be the same if the user does not remember to set them that way. On 3/15/01, I checked the ipsdtestR.for program against the old version of the ipsd2020 program in the subdirectory IPSd2020 and found that they gave identical answers. I then renamed the IPSDtestR.for program to IPSTD.for and removed the IPSDtest.for program from the IPSdtest directory. Later on 3/15/01 I found that the two programs did not give identical answers after all, and I found the error: the variables aNdayV and aNdayG in the new program were not typed real in the two subroutines MKGMAPTDN.for and mkvmaptdN.for. After these two problems were fixed (in the IPSHTD and IPSd1010 directories as well), the two versions gave identical outputs up until the t3dwrite for CR 1965 in both the new and old versions of the program. The answers are still slightly different. The correlation for the limited time series is now 0.850 rather than 0.840. I do not understand this difference, but it is so slight that I plan to ignore it. On 3/19/01 I moved this program over to CASS183. This version of ipsdtestR.for has better inputs and outputs for files. I modified this program and mk_ipsdtestR so that it is now called simply ipsdtest on CASS183. There are now three versions of the ipsdtest program in this subdirectory: IPSDtest.for, IPSD1010.for and ipsd2020.for, with their associated mk_ipsd files, and these should be used to change and test new routines and ideas. On 4/28/01 I made this subdirectory into a test directory where it is possible to modify different versions of different programs and then place them back into their appropriate subdirectories.
On 4/28/01 I did this with the ipsd2020 subdirectory - copied the whole contents of that directory to this one to see if it is possible to determine how to stop the velocity spikes in the time-dependent tomography matrix. (At least I think this is the problem.) On 4/28/01 the current version of the ipsd2020.for program with inputs set at 12.8 0.47 0.32 and 0.36 was supposed to give a 0.86 g-level correlation. It gives only a 0.580 correlation over the restricted range of observations; V gives 0.659 over the restricted range, but is spiky. On 7/30/01 I ran ipsd2020 to see what the parameters 12.8 0.47 0.32 and 0.36 would give using the limited range of observations on CR 1965. Otherwise, default values of the program were used. The g-values give a 0.131 correlation over the restricted range of observations; V gives 0.519 over the restricted range. These are nothing like the old values, so I do not know what the problem is here. I then ran using the parameters 13 .80 .15 and .25, and the values for the restricted CR 1965 range were 0.803 and 0.423. These are the same as the old values gotten on 5/4/01, and so I think I now know what happened. The run on that date says that there were some gridsphere and timesmooth changes in MKshiftd.for. The run of the ipsd2020 program in subdirectory IPSd2020 was the same on 7/30/01 as in the IPSdtest subdirectory. On 7/30/01 I found an error in the Timesmooth routine used in MKshiftd.for. The problem was the scratch array that was used. The array was dtmp, and this array actually gets loaded with densities temporarily. I do not know how much of the array was overwritten in Timesmooth, but when I changed the scratch array to VZtmp, a true scratch array, the correlations using the parameters 13 .80 .15 and .25 for the restricted CR 1965 range were 0.840 and 0.413, so some of the scratch array must have been used. On 7/31/01 I found an even more grievous error in that the constant for Timesmooth was CLRV.
This constant is in degrees, and was set to 13 (see parameters above) for the run above. This smoothing in time places virtually all changes in the velocity 3D matrix into the spatial coordinate. This might well tend to make the 3-D spatial matrix "spiky", as observed. The correlations using parameters 13 .80 .15 and .25 for the restricted CR 1965 range were 0.648 and 0.395. The spike at 7/27 is still present in the time series, however, and the variations in the velocity time series have not changed much, either. On 7/31/01 I set the temporal and spatial filtering of the MKshiftd.for velocity array at double the filtering of the velocity array in the main program. The hope is that whatever velocity changes are recorded will enter and be reinforced through the velocity surface map, and not through the velocity and density differences with height.