In another attempt to see why my meta-analysis of the CSIRO drift data shows non-nil values near Java and does not show a prominent peak at 34S like yours, I tried using the product of the 19 debris find-site results for each day and latitude. I got all zeros.

Your Appendix A.5 shows your method of overcoming this is to have a threshold of one (or start counting at one?) in the accumulation bins, where the average number of drift hits is around 6. Elsewhere you say that bins with fewer than two hits are dropped.

I’m using all the data, weighted by distance and time with ranges similar to yours. Weights do get masked to zero outside of the linear triangular windows. It’s not surprising that one of those 19 categories will contain a zero on any given day, and multiplying them all together gives zero.

There is another way to take the product of a large series: the factors can be summed in log space.

log(x*y*z) = log(x)+log(y)+log(z)

A small epsilon must get added because log(0) = -inf.

To account for the different number of tracks at each latitude bin, my histograms already use the arithmetic mean rather than just summing. I can now take the geometric mean instead, which is built on the product of the values:

geomean = exp(sum(log(dataset+epsilon))/n)-epsilon

(I use the scipy.stats.mstats.gmean function.)
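For anyone wanting to reproduce this, here is a minimal sketch of the log-space trick in plain Python (the function name and eps value are illustrative; scipy.stats.mstats.gmean computes the same geometric mean, without the epsilon offset):

```python
import math

def geomean_with_epsilon(values, eps=1e-9):
    # Sum logs instead of multiplying: log(x*y*z) = log(x)+log(y)+log(z).
    # eps guards against log(0) = -inf when a bin contains a zero.
    n = len(values)
    log_sum = sum(math.log(v + eps) for v in values)
    return math.exp(log_sum / n) - eps

print(geomean_with_epsilon([2.0, 8.0]))       # ~4.0, since sqrt(2*8) = 4
print(geomean_with_epsilon([0.0, 4.0, 4.0]))  # small but non-zero
```

The point of the epsilon is visible in the second call: a single zero attenuates the result heavily but no longer wipes it out entirely.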

Here is a plot using this method to get the joint probability of the 19 sites, and also the two windage sets:

https://drive.google.com/file/d/1EGU26hvTEdQ_roN9FyRM69SJX-c9MuDX

As you can see, the shape of the resulting curves is again unchanged. There is no major peak at 34S, and values near Java are not nil. If I expand the reporting window to allow much earlier arrivals, the values N of 23S are more intense.

To test whether the method is robust, I also tried using the geometric mean when summing tracks by latitude. The weakest latitudes have more zero values with taller spikes, but the shape is again the same.

DrB: “I am not aware of any credible reports of an earlier flaperon arrival. The beach cleaning crew would have removed such debris when it was first found, so they did not observe it at an earlier date.”

I already addressed this point. It’s a very narrow assumption that debris stuck in a region would wash up again on the same resort beach. Victor and I were tagged on Nov 30 in a news report of a prior flaperon sighting at Reunion on May 10, 2014:

https://x.com/elizanow1/status/1730213082469908990

You may be disputing the credibility of a local news report, but your narrow assumptions may not apply. It is quite possible that the flaperon was in the area for months or on remote beaches before it hit a popular beach and was recognized as part of MH370.

DrB points 4&5:

You say that the CSIRO plots were not a joint PDF, presumably because only the flaperon had been found at the time. Then you say they ignored half the 7th arc because, “… That is, the area of the Joint PDF curve north of 26S is essentially zero.”

That seems like either a contradiction or more assumptions based on your own results, which I have questions about. The CSIRO was initially tasked with determining whether the active search area was compatible with drift. That was their focus.

I think I’ve shown that my method of weighting the histograms by time and distance is robust to even large changes in the parameters, and it has enough SNR to show the model uncertainty within each latitude division by using finer resolution.

I’ve accommodated all of your constraints, but there is still a big mismatch with your result. I could share my Python code for others to run the experiment if you still dispute my result. Is your spreadsheet available?

]]>Sorry for the delay in replying; I have been traveling and fully occupied for a while. The file you shared on Nov 28th is really helpful. I guess one of the key questions is “if the errors are statistically independent amongst all the observables”, in particular with respect to the BFO data. I currently assume that the BFO frequency bias may change especially after a cold restart of the OCXO and that drift is limited otherwise. The formal approach through Bayes’ Theorem is really interesting.

Needless to say, I have put a reasonable effort into looking for industry references to the use of lightning strike protection layers in composite materials, in particular those used in the B777. Everything I have found tells the same story; the LSP is, of necessity, located just below the surface layer of the composite. It is never embedded in the resin layer below the core.

As I’ve noted previously, a five-second thought experiment is all that is needed to eliminate positioning the LSP such that the core material sits between it and the surface layer.

I can say with at least 90 percent confidence that Broken O is not only **not** from a B777, it is not even an aviation component.

I am at a loss to understand why there is this insistence on trying to demonstrate that these items are something that they manifestly are not. The endeavour sits somewhere between obsessively unhinged and consciously fraudulent.

]]>Further bogus and false contrivances are not necessary.

It is already evident from the images posted of ‘Tataly-Antsiraka’ and ‘Broken-O’ that the substrate claimed to be an LSP lies at a depth within the panel where lightning strike damage would be maximised, not minimised (regardless of the manufacturer of the substrate).

]]>Thanks for the link for ‘Strikegrid’. They say they introduced this on the 777 from 2004 onwards. Please help me in my quest to find composite designs in earlier 777s. This will prove one way or the other whether Tataly and Broken-O are from a 777.

Perhaps, near the trailing edges, where the current is being taken towards the static discharge wicks it is acceptable to embed the mesh.

And yes, it is me on p.17 who has been trying to find possible locations for these pieces.

]]>Tim, here’s an industry document from The Gill Corporation specifically for the B777 – https://www.dropbox.com/scl/fi/jm8jlt17ye76qy29lfbjy/TheDoorway-Fall2022-1.pdf?rlkey=91kzjdgse7xgyr17inx5ji2g8&dl=0

Note, the diagram on p.9; LSP (in this case, Strikegrid) immediately under the surface material and outboard of the core material.

And when you say, “nothing conclusive”, you actually mean “nothing at all” don’t you?

You only need a 5 second thought experiment to eliminate the possibility of the LSP having the requisite effect if it were embedded in the resin layer, regardless of where the component was located.

Just for the avoidance of doubt, are you the “*aviation professional familiar with the B777*” referenced on p.17 of the MH370 Boeing 777-200ER Lightning Strike Protection document?

Yes, the Tataly piece will not fit between the fastener lines on the flaps. So I’m still looking to find another possible location.

@Mick,

Thanks for that Boeing report. So for the B787 the LSP is definitely in the outer layers. I’m still considering the possibility that in older designs, and especially near the trailing edges, the LSP might be embedded. But so far I have found nothing conclusive.

G’day Tim,

I’ve not been able to find any reference material that shows anything other than the most logical arrangement of the layers; the external surface layer, the lightning strike protection layer, an isolation/insulation layer, the core material layer.

There are a good many illustrations of that layering sequence readily available to anyone who looks (*eg* the Boeing graphic included in this industry post

https://www.comsol.com/blogs/protecting-aircraft-composites-from-lightning-strike-damage/)

Doubtless, the authors have elected not to include, amongst their twenty various diagrams and photographs, a diagram showing the location of the LSP in aviation composites for a reason, just not a particularly “good” reason with respect to honesty or veracity.

Beyond what research might indicate, just think about the purpose of the LSP in composite materials in aeronautical applications. To quote from a NASA paper on the topic,

“*Without proper lightning strike protection, the carbon fiber/epoxy composites can be significantly damaged, particularly at the entry and exit points of the strike. Approaches have been developed to protect the composite structures from lightning direct effects to reduce damage to acceptable levels by using conductive foils or meshes in the outer layer of the composite system.*” (Electrical Characterizations of Lightning Strike Protection Techniques for Composite Materials, Szatkowski)

The LSP cannot fulfil its purpose of keeping the current/heat generated by a lightning strike **away from the core** if it is positioned such that the electrical current has to pass from the surface layer **through the core** to reach the LSP. It is patently ridiculous to suggest otherwise.

A five-minute conversation with anyone involved in composite construction techniques will quickly reveal what the mesh layer on the bag-side (furthest from the exterior surface layer) of the composite is – it is the resin/media flow mesh.

One can only wonder why the two authors are going to such great lengths to fraudulently misrepresent these pieces as having come from 9M-MRO.

]]>Please can you confirm that in composite construction, the lightning protection layer is always on the surface layer? Is it possible that it is embedded in certain areas of the aircraft?

Thanks

]]>Thanks for that link, Victor. Apparently for some of those pushing the faked video, the use of stock visual effects for the “portal flash” wasn’t enough evidence. Hopefully this will put the matter to bed. Brandolini’s Law in action.

With a bit of luck someone will put something together addressing similarly fraudulent claims, such as aeronautical composites having the lightning strike protection layer being separated from the surface material by the honeycomb core, or the number of satellite data units installed on 9M-MRO.

]]>1. You said: “Taking a multivariate joint PDF makes that even more difficult, especially with 19 (or 86K) products. If I understand sk999’s method on the SATCOM analysis, he’s taking the eigenvectors or SVD of the covariance matrices to force each pair to be orthogonally uncorrelated before taking the product. I don’t see any such treatment in your debris drift PDF method.”

You are conflating two different problems, which require different statistical treatments. It’s nonsensical to say that the calculation of the joint probability of a given crash latitude matching all the MH370 debris reports should use the same equations/methods as a figure of merit used in fitting one set of SIO route parameters to best match the statistical behavior of the BTO and BFO residuals.

2. You also said: “I think that’s still not long enough to be compatible with even the flaperon, which was reported there “months earlier”.”

I am not aware of any credible reports of an earlier flaperon arrival. The beach cleaning crew would have removed such debris when it was first found, so they did not observe it at an earlier date.

3. You also said: “I am not a statistician, but my understanding is that joint probability products are not common except for understanding how distributions relate to each other, and that those datasets should be independent, not conditional.”

If you became familiar with the proper methods for the statistical analysis you are attempting, you would appreciate your error. The MH370 problem is like the coin-flip example. Suppose I ask what the probability is that I will flip heads 10 times in a row. That probability is (1/2)^10 ≈ 0.001, not 10 x ½ = 5 or even AVERAGE(½, ½, ½, ½, ½, ½, ½, ½, ½, ½) = ½. When every trial must have the same outcome (either “heads” in the coin-flip example, or a drift trial matching a MH370 debris report), you must use the product of the individual probabilities. Thus, you have to find the product of all N individual debris-report (or coin-flip) probabilities. In the MH370 case, I want to know the probability that MH370 debris landed at ALL the debris reporting sites (just like all the coin flips had to be “heads”). Therefore, I also have to find the product of the individual probabilities for each debris report.
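The coin-flip arithmetic above is easy to verify in a few lines of Python (a trivial sketch, not anyone’s analysis code):

```python
from math import prod
from statistics import mean

flips = [0.5] * 10   # probability of heads on each of 10 flips
joint = prod(flips)  # probability that ALL ten flips come up heads
print(joint)         # 0.0009765625, i.e. (1/2)**10, about 0.001
print(mean(flips))   # 0.5 -- the average says nothing about the joint event
```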

4. You also said: “My simple histogram summing method appears to be quite similar to the method used by David Griffin, who provided the datasets. Here’s an image on his site that’s from his paper “The Search for MH370 and Ocean Surface Drift”, p 18, fig 3.2.1 . . . It only covers latitudes 26-42S, but you can see by the same early peaks near 31S and the dropoff near the areas already searched that his probability histograms are being summed.”

You are misinterpreting those CSIRO plots. The three panels each show a single PDF (not a sum or a product) which is the probability that debris are predicted to land within certain geographical zones. CSIRO is not combining multiple PDFs in these plots. They simply show the number of trials predicted to be in each geographical zone at the selected time bin. If you wanted to know the probability that non-flaperon debris were predicted to land in Africa AND land in Reunion AND land in Western Australia, you would have to find the product of the three plots.

5. You also said: “I’ve dropped the PDF product from my plots because I think it’s the wrong approach for such uncertain data, resulting in multiplication by zero of useful results. It was only added to make that point.”

You seem to be saying two things here: (a) you won’t exclude a crash latitude which has zero probability of matching one or more MH370 debris reports, and (b) that only non-zero probabilities are “useful” (implying that having a zero probability result for a given latitude is not useful and should be ignored).

Those statements defy common sense and illuminate your (admitted) lack of understanding of statistics. Uncertainty (noise) has nothing to do with selecting the proper analysis method. In addition, ALL predicted probabilities are useful, regardless of their numerical value. In the MH370 case, near-zero probabilities for any debris report are quite effective in eliminating portions of Arc 7. That is why CSIRO only plotted latitudes from 26S to 42S: they had already concluded that a crash north of 26S was so improbable that this portion of Arc 7 was inconsequential in making a crash latitude prediction. That is, the area of the Joint PDF curve north of 26S is essentially zero. By NOT excluding zero-probability latitudes, you are applying subjective bias in predicting the joint probability, and, as I have previously pointed out, your result is not an accurate measure of relative probability. It won’t reliably tell you whether Latitude A is more likely to be true than Latitude B.

]]>Not all of the content is dynamic. Images and JS can be cached. If a page hasn’t changed (sometimes affected by mobile or desktop rendering), the whole thing can be treated as static. There are WP caching plugins that help. Here’s more info:

https://www.cloudflare.com/learning/cdn/cdn-for-wordpress

@DrB:

The very narrow time windows were an extreme test to show that there’s no need to make individual plots to discern the modeled arrivals at each destination. It wasn’t meant for histogram analysis, except to show that the shape of my histogram curve was the same even with narrow time restrictions, though the result is down in the noise.

Here’s a time plot and histogram with time windows closer to yours. The details are in the title:

https://drive.google.com/file/d/1E0ovQqg7_3WBO8rik-RjJWyf5XZBo609/

Histogram:

https://drive.google.com/file/d/1E1secWQB3jKeU2K8N5VZD4OFBaHM3D1e/

We don’t know how accurate the drift modeling is. For modeled debris arriving after it was actually found at a site, I taper the proximity weight to zero at 20 days late. We also can’t know the reporting delay added to the drift uncertainty; debris is still being found. An arrival 100 days early has 50% weight in this plot, but I think that’s still not long enough to be compatible with even the flaperon, which was reported there “months earlier”.

I’ve dropped the PDF product from my plots because I think it’s the wrong approach for such uncertain data, resulting in multiplication by zero of useful results. It was only added to make that point.

@DrB: “I thought we were analyzing the same CSIRO predictions and the same MH370 debris reports, so the “data” are not different.”

AFAIK we are using the same two CSIRO datasets. Here’s what got lost: “Again, because my candidate is based on new acoustic evidence, it is not dependent on being the highest peak on an optimized search probability curve from inexact data with narrow assumptions.”

I am not a statistician, but my understanding is that joint probability products are not common except for understanding how distributions relate to each other, and that those datasets should be independent, not conditional. Taking a multivariate joint PDF makes that even more difficult, especially with 19 (or 86K) products. If I understand sk999’s method on the SATCOM analysis, he’s taking the eigenvectors or SVD of the covariance matrices to force each pair to be orthogonally uncorrelated before taking the product. I don’t see any such treatment in your debris drift PDF method.

My simple histogram summing method appears to be quite similar to the method used by David Griffin, who provided the datasets. Here’s an image on his site that’s from his paper “The Search for MH370 and Ocean Surface Drift”, p 18, fig 3.2.1:

https://www.marine.csiro.au/~griffin/MH370/br15_pwent2d/pfromto_1_nonflap.gif

It only covers latitudes 26-42S, but you can see by the same early peaks near 31S and the dropoff near the areas already searched that his probability histograms are being summed.

I do use many more histogram bins along the latitude axis to show the many variations due to the butterfly effect.

]]>Thank you for your comments. I have always respected the work done by this group and did not intend my comments to be so absolute by using “never” in my language.

That the “straight”, “unpiloted” course to fuel exhaustion meets all the Inmarsat data has been clearly shown to be feasible. I have never had an issue with using this assumption because it is clearly viable and needed to be searched.

If the aircraft was programmed to fly directly toward the point where arc 7 crossed at 25S, for example, it would have had excess fuel and all of the other arc crossings would not match the data. So a direct route to that location would not be feasible because it would not fit the Inmarsat data.

But there are many ways to maneuver to burn fuel along the way to put the aircraft into the ocean with empty tanks using s-turns to decouple range and endurance. But out of the infinite routes with s-turns, it had to “just by coincidence” match the recorded Inmarsat data. This could not have been planned – it is just an outcome from a wandering flight out of the infinite maneuvering possibilities. This is an unplannable random route that may have actually been flown.

My question was – “could an active pilot have decided to plant the aircraft with zero fuel at a specific point in the SIO and flown a maneuvering path to get there?” And the actual flight that day would have just resulted in the same Inmarsat data as the straight profile.

Is there only one possible flight path that could have been actively flown that night that could have resulted in the exact Inmarsat data set?

You can’t preplan the flight path to fit the unknowable Inmarsat data, but it might be feasible to pick an endpoint and work backwards to create a feasible solution.

This is not a good outcome if it is possible to do with reasonable maneuvering. Better to have a dead pilot.

]]>Thank you. Like you I believe we witnessed a deliberate action to the end. I make the “worst case assumption” (to me, a realistic assumption) that the pilot was savvy and the plan was to hide the crash.

A savvy pilot had no idea about the Inmarsat rings, but does know the SATCOM is a possible vulnerability: SATCOM shows he is still flying if a call comes in, and at least he must have wondered if the sat calls gave away GPS info. For that reason, the pilot turned off SATCOM (we call that Arc7) and continued flying, probably under the thick clouds, with fuel. Prior to that he probably had already (before Arc6) descended to around FL150 (based on the Arc6-to-Arc7 distance and speed). At Arc7 we see the BFO dip; he descends below about 5000 ft to become visually invisible. This probably happens in the range 30-32S, where there is still fuel. Your guess is as good as mine re: the end point, but I am thinking of a deep, hard-to-search spot, as far as possible from Arc7.

I suggest the savvy pilot assumed we might figure out the southerly path, so his priority was to save fuel for a hidden flight path after Arc7. Given all that, I do not see a need to fly an S-curve or stay in darkness (which is the 38-South crowd’s key assumption).

I actually feel the savvy pilot scenario is what the data says, and it is not so hard to figure out up to Arc7. After Arc7 is the problem: where and how far from Arc7 is possible in the worst-case scenario?

]]>You said: “As an extreme test, I’ve shortened the window to just 16 days, which reveals all the arrival waves. It has little effect on the shape of the latitude histogram. It just gets spiky/noisy due to fewer samples.”

Your conclusion is incorrect for crash latitudes north of 23S (and especially at 8.3S). The likelihood values there were dramatically lowered when you applied a penalty for early arrival, compared to your previous plot, which did not. For example, at 8.3S your previous plot (with no penalty) showed relative likelihoods of 0.4 for both your “low windage” and “high windage” debris categories. Your normalized relative likelihood sum was 0.4 and your likelihood product was 0.16. Your most recent version of the same plots (with narrow time windows) shows zero for the “high windage” debris and about 0.02 for the “low windage” debris, with a sum of 0.02 and a product of zero (why didn’t you show the product plot?). How can you say that there is “little effect” when your proposed crash latitude changes likelihood from 0.4 in the sum to 0.02 (20X lower), and from 0.16 in the product to zero (i.e., from feasible to infeasible)? I see a huge effect north of 23S.

You don’t say how late arrivals are penalized in your final set of likelihood plots, but you allow arrivals which are two years early to contribute, and you penalize by only a factor of two for being one year early. This time window is too wide to provide useful latitude selectivity, especially for the flaperon. The resulting loss of time resolution smears your PDF in latitude space by overweighting crash latitudes which are not well aligned in space and time with the debris reports.

You said: “Boxcar filters usually broaden the result . . .” That is not true in general. It depends on the width of the boxcar. When the boxcar is narrower than the average width of a triangular window, it has higher resolution and it narrows (not broadens) the result.

You said: “My point about why a lesser result based on different data needn’t be the highest peak appears to have been lost.”

I thought we were analyzing the same CSIRO predictions and the same MH370 debris reports, so the “data” are not different. Did you mean different analysis methods? Or a different latitude? I certainly agree that the true crash latitude won’t always appear at the peak of a predicted PDF. If we had a large number of data sets, that would tend to be the case if we did the analysis correctly. Since we have just one crash site and one set of MH370 debris reports, it would be coincidental if the peak of the predicted debris drift probability happened to exactly match the true crash latitude. All latitudes which have significant relative probabilities (which are above the noise floor) are “feasible”.

Your analysis method differs from UI (2023) in three regards. First, you equally weight two debris categories (the flaperon and everything else). In UI all debris had equal weight. Second, you apply smaller penalties for significant mismatches in space and time between the MH370 debris reports and the CSIRO-predicted drift tracks. Doing that lowers the latitude selectivity. Third, you (generally) average the PDFs rather than multiplying them (as one should for conditional probabilities). That multiplication process increases the fractional noise, whereas averaging the PDFs reduces the fractional noise. However, when you average conditional PDFs you no longer have a quantity which is a true probability. As a result, comparisons of average values do not reflect their relative probabilities. You can’t even say that latitude X is more likely than latitude Y, much less by how much.

For example, suppose at latitude X we have two PDFs (for two debris classes) with values of 0.2 and 0.8. Their average is 0.5, but their product is 0.16. Next, at latitude Y we have 0.4 and 0.5 for the two debris classes. Their average is 0.45 and their product is 0.20. Which is more likely? Latitude X or latitude Y? Using the average, latitude X wins out (0.50 versus 0.45). Using the product, latitude Y is more likely (0.20 versus 0.16). So, averaging conditional probabilities does not even guarantee that you can tell which of two latitudes is more likely.
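That numerical example can be checked directly; a minimal sketch (the variable names are mine):

```python
from math import prod
from statistics import mean

lat_x = [0.2, 0.8]  # two debris-class PDF values at latitude X
lat_y = [0.4, 0.5]  # the same two classes at latitude Y

# Averaging ranks X first (0.50 vs 0.45), but the joint (product)
# probability ranks Y first (0.20 vs 0.16) -- the orderings disagree.
print(mean(lat_x), mean(lat_y))
print(prod(lat_x), prod(lat_y))
```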

In order to assure that you can determine the relative probabilities, you must use the product when the probabilities are conditional (as these are). You want to know the probability of a particular crash latitude producing drift tracks matching MH370 debris report #1 in space and time AND that latitude also matching report #2 (and so on). That “AND” is what drives the product. We want to know the probability of matching report #2 GIVEN the fact that the same latitude also matches report #1. Thus, the probability of a latitude matching multiple debris reports is P(A & B) = P(A) x P(B). If you don’t use the product, you don’t have a relative probability curve. The inevitable price you pay in knowing the relative probability is higher noise. The product has higher fractional noise than all the components. Trust me, I don’t like that “penalty” of higher noise for doing the computation correctly, but we have to live with it. The benefits are much greater latitude selectivity and the ability to accurately determine relative probabilities of different latitudes.

]]>Cloudflare is a free and easy-to-manage content delivery network. Their free account can cache your website content and deliver it fast globally. All it takes is transferring your DNS to them, which is also free. After a quick setup, your NetSol site will be hidden from hack attacks, and Cloudflare specializes in blocking them through their service. Just take your current DNS settings for mail and such, and copy them over. A bonus is that CF also handles the SSL layer and certificate. Your NetSol site then only needs to deliver HTTP to CF. If you want to get fancy, there’s a WordPress plugin to further optimize the CF cache.

@DrB:

To answer your disputes of my meta-analysis, I’ve refined the distance weighting to now be a linear taper of a set window size to zero. No more fog. I’ve done the same for the late- and early-reporting penalties. As an extreme test, I’ve shortened the window to just 16 days, which reveals all the arrival waves. It has little effect on the shape of the latitude histogram. It just gets spiky/noisy due to fewer samples. Time plot:

https://drive.google.com/file/d/1DaFM5crmAsEYUV-kzdBX_HHnI_-kUAPq

Histogram:

https://drive.google.com/file/d/1DeDi3D3pm8Ikg-H6CTdZdRVxpEB5IzLp

All the weights are averaged per latitude bin among the discovery sites in the time plot, and between the two datasets.

I don’t agree that the reporting window should exclude early arrivals, but here is a plot that attenuates data arriving a year before it was found by half, and two years is nil. Considering that the majority of the discovery times were in the range of 800-900 days after the 7th Arc, this is a reasonable value:

https://drive.google.com/file/d/1Dgfs7mI712AOBAckrWE-9cVN8yL1XvaO/view

https://drive.google.com/file/d/1Dg9s9Fh0y-p_-5uH6tHIFUWdR_S9pNsm/view?usp=sharing
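As a concrete illustration of that attenuation, here is a hypothetical sketch of the early-arrival taper (the function name and the linear shape are my own; only the half-weight-at-one-year and nil-at-two-years points come from the description above):

```python
def early_arrival_weight(days_early):
    # Full weight for on-time arrivals, 0.5 at one year early (365 days),
    # tapering linearly to zero at two years early (730 days).
    return max(0.0, 1.0 - days_early / 730.0)

print(early_arrival_weight(0))    # 1.0
print(early_arrival_weight(365))  # 0.5
print(early_arrival_weight(730))  # 0.0
```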

It’s quite clear here that the main features of my previous histograms are unchanged. The method I’m using is as simple as it gets, and obviously robust to variations in weighting.

I suspect the reason that it doesn’t match yours is that you are multiplying all the fractional site sums together, getting a very noisy result where all discovery sites must agree with significant matches, then smoothing by lumping the far fewer hits into wide 1-degree bins. The other major difference is that I’m using a tapered linear weighting on the time/distance proximities, but you’re using a rectangular window. Boxcar filters usually broaden the result, where any strong hit has full weighting for the duration of the window. Triangular/Hanning/Gaussian/etc. windows give finer results.
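To illustrate the difference between the two window types in a toy example (the function names and the 100-day half-width are invented for illustration, not taken from either analysis):

```python
def boxcar_weight(dt, half_width):
    # Full weight anywhere inside the window, zero outside.
    return 1.0 if abs(dt) <= half_width else 0.0

def triangular_weight(dt, half_width):
    # Linear taper from 1 at a perfect match down to 0 at the window edge.
    return max(0.0, 1.0 - abs(dt) / half_width)

# A hit 80 days from the nominal arrival, with a 100-day half-width:
print(boxcar_weight(80, 100))      # 1.0 -- full weight right up to the edge
print(triangular_weight(80, 100))  # near 0.2 -- strongly attenuated near the edge
```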

I wish you’d stop claiming that my results have a bias or a smear. It implies that I’m not following best scientific methods to avoid a very different sort of bias. As you are now focusing on my spelling errors, I don’t expect you’re going to come around to see a simpler approach as valid, or as a validation test of the meta-analysis.

My point about why a lesser result based on different data needn’t be the highest peak appears to have been lost. Suppose that your candidate site doesn’t sum to the highest probability along the 7th Arc (which is what my plots show). Not the highest, but not nil. If it turns out that your site is correct in the end, it would be clear that it really didn’t matter that there were more optimal drift paths at hypothetical crash sites. The only requirement is that it’s feasible – that a reasonable study doesn’t rule it out entirely. If other studies like mine ruled yours out entirely (which I don’t), then they would have been wrong.

]]>Nobody can “know” the MH370 pilot’s motivations. However, there is still a slim chance we can know his actions, and this might illuminate his thinking, although faintly.

You are going out on a limb when you say that a pilot would “NEVER” do such and such.

I won’t go nearly that far. I will say that it is highly unlikely the pilot knew about the Inmarsat data being archived. Nobody at MAS knew this, and not everybody at Inmarsat knew it. If the pilot knew it, he could have prevented it by not repowering the IFE and thus shutting down the SDU. The fact that he left power on the IFE implies the pilot did not think this allowed aircraft tracking. Then, if the pilot “knows” he can’t be tracked, and especially if even the starting point of the SIO route is highly likely to be unknown to searchers, then his crash location is unknowable, no matter what LNAV mode he used or whether he made additional turns en route. So, he may very well have concluded that his flight could not be tracked, and that how he flew into the SIO was immaterial. He was almost right about that. He probably thought about floating debris eventually washing up on distant shores, but that could not be avoided, and that would be unlikely to produce, by itself, a pinpoint crash location.

What we now know is that, if LNAV were flown following a geodesic to a single distant waypoint, that could not have been further south along Arc 7 than circa 35S, and even to get that far requires two fuel-saving measures. One is a reduced speed and therefore a reduced fuel flow from roughly 18:28 to 19:41. The second is reduced fuel flow (but not reduced speed) from 19:41 to MEFE at 00:17 with the air packs off.

We don’t yet know if this happened. The alternative explanation is a synthesized route with a reduced average speed (which removes the necessity of having the air packs off) and multiple speed/bearing changes designed to match the BTOs and BFOs. Numerous routes, which are far from unique, can match both the BTOs/BFOs and MEFE. However, they all end up on Arc 7 well north of 32S where the debris drift probability is generally quite low. Some of the Arc north of 32S is also eliminated by the aerial search non-detections of a floating debris field.

A third possibility is a route similar to UGIB but with very minor bearing or speed changes. I don’t give this possibility much credence, because I would expect a purposeful “evasion maneuver” to involve noticeable turns, and when you include this you must end up north of about 32S, where the debris drift probability is generally low. There is a small chance for a multi-turn route ending near 27S with a relative drift probability of about 1/3 and a relative aerial search probability of about 1/10 of 34S values. So, there is a few percent chance of a crash circa 27S.

You said: “It just seemed off to me that a pilot would not actively work to hide the aircraft and would just take a great circle route to oblivion.”

My guess is the pilot reached his “oblivion” circa 19:41, less than 2 hours after murdering the passengers and other flight crew members. Even if that suicide happened circa 19:41, that would not prove no turns occurred later. They could have been programmed into the FMS beforehand. The inclusion or exclusion of turns in the SIO Route does not depend on having a functioning pilot after 19:41, but a post-MEFE manual glide does. I don’t think this glide occurred because (a) we have no indication of a water ditching based on analyses of recovered debris and (b) the satellite data are fully consistent with an unpiloted crash after MEFE. The final issue is why the previous searches didn’t locate the debris field. There may be multiple reasons for that. As Victor has suggested, the debris on the sea bottom may consist of smaller than normal pieces, and they may lie in difficult terrain that has not yet been thoroughly investigated.

]]>My concern is that a pilot that wants to hide an airplane would NEVER fly a great circle path to fuel exhaustion. This greatly reduces the possible end point from infinite to half infinite.

I would use S-turns or holds along the way to decouple range and endurance.

But with the pilot not knowing about the Inmarsat pings, the maneuvers would have had to be a lucky set. It is still possible that fuel was exhausted along a course crossing Arc 7 at 25S, for example.

I do not think a pilot could have preplanned a maneuvering route to put the aircraft in the water and plan to meet the exact Inmarsat data, but a pilot could have planned to exhaust fuel at a specific location and maneuver to decouple range and endurance and it just happened to result in the data set.

I do not have any reason to question all of the simulation and analyses that have been done by this group. It just seemed off to me that a pilot would not actively work to hide the aircraft and would just take a great circle route to oblivion.

]]>It may be that I have not optimized the site, but I am not doing anything complicated that would account for it being so slow.

If this was a commercial site that I was using to conduct business, I would have left long ago. However, considering the niche nature of this site, and the time and cost to migrate it to another provider, for the time being, I am putting up with the horrible service, and strongly advising others to go elsewhere.

]]>From my modern Safari mobile browser, each refresh of your website or click on any link within the website takes approx. 11 seconds to display, occasionally longer, making my device inactive and triggering the 30-second auto-lock I have enabled, even when I’m close to my WiFi router. No problems with any other website, or apps. Your actual website resources seem very basic in terms of display and usage. Seems like a Network Solutions traffic flow issue perhaps.

]]>First you said: “Your fig 10.1-1 of course shows 34S arrivals, because you have selected the data that way, excluding any tracks that don’t reach all the discovery sites twice.”

Then you said: “Perhaps I’m losing track.”

I think so. Figure 10.1-1 showed (as explained in the introductory text) just one track per debris site, to demonstrate non-zero probabilities for all MH370 debris reports from 34.2S.

Appendix A.5 in UI (2023) says: “To assure the statistical noise in a PDF (computed using one of the probability equations listed above) is not excessively high, for the non-MH370 validation test cases we applied two conditions using Method I over a 3-degree wide region of interest (ROI) which is centered at a predicted POI latitude:

a) the minimum number of trials simultaneously in both the distance and time windows is at least 2 per latitude bin, and

b) the average number is at least 5.”

This lower limit of two trials assures the noise in the Method I optimization route is not excessively high. That’s a different kettle of fish than Figure 10.1-1, but it also demonstrates that drift tracks from 34S provide significant probability for all debris reports, because these noise-related conditions were satisfied for 34S and all nearby latitudes.

You said: “If we are comparing specific candidate sites, I’ll note that on my histogram result the peak at 8.32S is 50% higher than the range around 33-34S.”

Your plot has significant deficiencies which contaminate and bias your histogram. Most importantly, it fails to fully consider the arrival times at various sites. As UI pointed out, time is the dominant discriminator for latitudes south of 30S. So, your histogram plot, which, as far as I can tell, only penalizes sort-of-late arrivals and does not reject any early arrivals, is strongly biased toward nearby crash sites which have predicted arrivals which are generally too early. Your histogram is therefore not useful in comparing the northern end of Arc 7 with the south end of Arc 7, which is the comparison you are trying to make, because you don’t remove those arrivals which are too early (and which, for the flaperon, is all of Arc 7 north of about 27S).

You said: “Again, because my candidate is based on new acoustic evidence, it is not dependent on being the highest peak on an optimized search probability curve from inexact data with narrow assumptions. It only needs to be plausible (sic). I believe it is.”

I don’t agree. It is implausible because it has extremely low probability of matching the MH370 finding locations and dates. You will see this (as CSIRO, UI, and others have already published) if you properly eliminate both late and early arrivals. In particular, 8S is strongly excluded by the Flaperon discovery at Reunion. So far, the only explanation is that the flaperon was somehow stopped by another intervening island for a year before magically continuing on to reach Reunion on day 508. I don’t think that is sufficiently likely to have occurred to make 8S a plausible MH370 crash site. It’s also a highly unlikely crash location for the other sixteen non-flaperon debris.

]]>You mentioned that my first graphical plot was biased toward slower debris. That wasn’t true for the intensity of the proximities, but I believe it is true for the now summed histogram. Fast tracks where debris went straight to discovery sites and beached are prominent in proximity but low valued blips on the histogram. Slow particles S of 23S end up drifting around near the debris sites after the discovery, which I agree is time sensitive and shouldn’t get positive weighting. Debris can’t be discovered before it arrives.

I applied a 60 day linear taper after debris had passed a discovery site date, to allow for uncertainty in the model. It mostly changes proximities below 23S, putting dark bands on the right side of the plot after 850 days. This only drops those histograms by a few percent.
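
The taper can be written as a simple weight function. Below is a minimal Python sketch of that idea; the function name and the vectorized form are mine, only the 60-day linear taper is from the description above:

```python
import numpy as np

def late_arrival_weight(days_after_discovery, taper_days=60):
    """Full weight up to the discovery date, then a linear taper to zero
    over `taper_days`, to allow for uncertainty in the drift model."""
    d = np.asarray(days_after_discovery, dtype=float)
    w = 1.0 - d / taper_days       # ramp down after the discovery date
    w[d <= 0] = 1.0                # on-time or early passes: full weight
    w[d >= taper_days] = 0.0       # more than 60 days late: excluded
    return w

# full weight on the discovery date, half weight 30 days later, zero at 90
print(late_arrival_weight([0, 30, 90]))
```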

I also took your suggestion, narrowed the distance uncertainty to 50 km, and made the summary linear. There is a higher fog factor from distant debris mostly visible on the left, and it is now more “smeared” with fewer proximity peaks:

https://drive.google.com/file/d/1DVtjQBuPZaX3NM0TZDUJfb6nr6ud5808

The histogram is flatter with lesser peaks:

https://drive.google.com/file/d/1DUyQQroQt_NB-GS7D2VmwmVva8f7tuda

DrB: “I don’t know where you got the notion that figure shows tracks which reach the discovery sites twice.”

I’ve read all your papers, and I know you’ve put over a thousand hours into the drift studies. As I said, the data selection seems overly complicated, and it has changed over time. Perhaps I’m losing track. In your 2023 Appendix A.5 (referenced in A.7a) you say:

“a) the minimum number of trials simultaneously in both the distance and time windows is at least 2 per latitude bin”

Your 2020 coauthor wrote a 2021 drift paper that appears to detail similar methods, results, and shared content. He notes in section 7:

“(a) The number of trials arriving at the debris location within the distance window must be ≥ 100.

(b) The number of trials arriving within both the distance and the time window must be ≥ 20.

(c) It must be possible to identify a single mode across all crash latitudes.

(d) There must be a crash latitude bin count ≥ 4 trials in the resulting crash latitude bin.”

(distance window 30km for exclusion)

I may be confusing your previous methods with current methods I, II, and III.

DrB: “The four “likelihood” plots you showed in your second link show values at 8.32S which are much LOWER than likelihoods at latitudes SOUTH of 23S. On that point we can agree.”

If we are comparing specific candidate sites, I’ll note that on my histogram result the peak at 8.32S is 50% higher than the range around 33-34S.

My approach to a meta-analysis may not be as mathematically rigorous, but it is simple, uses common image and signal processing methods, makes use of the entire dataset, and does not exclude other candidates.

It is not designed to be selective for any outcome.

I try to follow the scientific method, in this case running an experiment to see if it invalidates my acoustic hypothesis. You’ve been claiming that only your site has a search probability match, and that your analysis makes my candidate and others impossible. I still don’t see that as valid, and I’ve pointed out why.

Again, because my candidate is based on new acoustic evidence, it is not dependent on being the highest peak on an optimized search probability curve from inexact data with narrow assumptions. It only needs to be plausable. I believe it is.

]]>1. You said: “I should not have said “Never” about the non-flaperon particle drifters never approaching the discovery sites. I was looking at the lower probability at far south latitudes, and noting that many of the low windage set appeared to be past the 1028 day window compared to the faster flaperon set.”

You appear to draw the conclusion that arrivals after the calculation time window closes somehow lower the drift probability relative to other latitudes. That is not the case. The drift probability is the PDF, as a function of assumed crash latitude, that debris drift trials are predicted to MATCH the finding locations and the ranges of arrival dates. Even if a particular latitude had predicted arrivals after 1,028 days (which we cannot know, since the tracks are not calculated), that does not affect the probability of matching the MH370 debris reports.

We start with an equal number of trials at each crash latitude bin. Then, for each finding location, we count how many of those trials match both the finding location and the range of possible arrival dates. The crash latitude which has the highest number of matching trials has the highest probability. This calculation does not measure, nor does it require us to know, how many trials might have arrived after the end of the calculation window. Since the calculations end at 1,028 days, we can’t know how many might have arrived after that date. We can count those which are still adrift, not having arrived anywhere, but even that number is immaterial to our calculation of drift probability.

The property we want to know is what fraction of the starting number of trials arrived at finding locations within the allowable time windows. The area-normalized variation of that likelihood with latitude is the drift probability PDF. Your conclusion that your observation (“many of the low windage set appeared to be past the 1028 day window”) somehow implies a lower relative matching probability is incorrect.
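
The counting procedure described above can be sketched in a few lines of Python (a schematic only: the data layout, the site name, and the example windows are hypothetical placeholders, not the UI values):

```python
def drift_probability_pdf(trials, findings):
    """Count, per crash-latitude bin, the trials matching BOTH the finding
    location (distance window) and the arrival-date window, then normalize
    the counts into a PDF over latitude. Late and never-arriving trials
    simply fail the windows; they are not counted anywhere."""
    counts = {}
    for t in trials:
        f = findings[t["site"]]
        lo, hi = f["day_window"]
        if t["miss_km"] <= f["max_miss_km"] and lo <= t["arrival_day"] <= hi:
            counts[t["lat_bin"]] = counts.get(t["lat_bin"], 0) + 1
    total = sum(counts.values())
    return {lat: n / total for lat, n in counts.items()} if total else {}

# hypothetical example: two matching trials from 34S, one too-early from 8S
findings = {"Reunion": {"max_miss_km": 40, "day_window": (450, 550)}}
trials = [
    {"lat_bin": -34, "site": "Reunion", "arrival_day": 508, "miss_km": 20},
    {"lat_bin": -34, "site": "Reunion", "arrival_day": 500, "miss_km": 30},
    {"lat_bin": -8,  "site": "Reunion", "arrival_day": 300, "miss_km": 10},
]
print(drift_probability_pdf(trials, findings))  # → {-34: 1.0}
```

Note that trials arriving after the window closes fail the date test the same way early arrivals do; how many more would have arrived after the calculation ends never enters the computation.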

2. You said: “Your fig 10.1-1 of course shows 34S arrivals, because you have selected the data that way, excluding any tracks that don’t reach all the discovery sites twice.”

I don’t know where you got the notion that figure shows tracks which reach the discovery sites twice. The UI text in Section 10.1 says: “In this figure we selected the one trial per debris site which gave the closest match in the time and distance windows.” The purpose of this figure is simply to show that CSIRO drift tracks exist, which have predicted arrivals consistent with all the debris reports we analyzed, and which start from the location of the UGIB LEP. No debris site in UI has zero predicted arrivals from 34S.

3. You said: “I’m not doing a simple inverse of distance, which would indeed give infinite values on an exact match. I take the inverse of each distance plus an uncertainty estimate. I started with 100 km, which is actually larger than your 10-56 km cutoffs.”

Equal distance weighting within a cut-off range made sense to us, and that’s why we did it in UI. Our distance limits were optimized for each finding location, and they ranged from 18 – 55 km. If I correctly understand what you wrote, your FOM is now equal to a constant divided by the sum of the distance and 100 km. Therefore, your FOM at 100 km is half of what it is at zero distance. That’s certainly better than using just the inverse distance, as you first described. However, the 100 km is still larger than needed for good trial statistics, and this reduces the latitude selectivity (i.e., it “smears” the crash-latitude PDF).

4. You said: “I think you misunderstand that there is no smearing of arrival times, or any time sensitivity at all.”

There is smearing and bias in your crash latitude plots because (a) you allow all possible arriving times with no temporal selectivity and (b) you combine all finding locations in a single plot with no optimization among the finding locations. You don’t even disallow predicted arrivals after the finding dates. If you allowed only trials which match the plausible ranges of arriving dates at each finding location (i.e., used a time window which is unique for each finding location), you would get much improved latitude discrimination (less “smeared” and less biased in crash latitude). The significant advantage of employing both spatial and temporal discrimination versus only spatial discrimination is demonstrated in Section C.3 in UI: “The lack of selectivity when using debris reporting locations only is caused by the fact that most debris from Arc 7 are carried westward by the combined West Australia, South Equatorial, and East Madagascar Currents and so end up in mostly the same locations. The more important discriminator is the variable length of time required to reach the westward currents from different parts of the arc. Therefore, the arriving times add significant information which enable a precise POI-latitude determination that is not possible with only several dozen debris recovery locations and no arriving times (as demonstrated in Figure C.3-1 above).”

5. You said: “My charts don’t favor my own candidate site with the largest peak of all, but they do show that the probability is increasing again toward Java, with a peak there that is higher at 8.32S than any value north of 23S.”

The four “likelihood” plots you showed in your second link show values at 8.32S which are much LOWER than likelihoods at latitudes SOUTH of 23S. On that point we can agree.

]]>I should not have said “Never” about the non-flaperon particle drifters never approaching the discovery sites. I was looking at the lower probability at far south latitudes, and noting that many of the low windage set appeared to be past the 1028 day window compared to the faster flaperon set.

Your fig 10.1-1 of course shows 34S arrivals, because you have selected the data that way, excluding any tracks that don’t reach all the discovery sites twice.

True, you can see that there is a weak contribution from distant sites. It’s noticeable as a fog on the left edge of the plot, brighter at the bottom where the roaring 40s latitudes are farthest west.

I think you misunderstand: there is no smearing of arrival times, or any time sensitivity at all. The only integration over time is the slight fog on the left where no arrivals are possible, which merely adds to all values and is easily distinguished from nearer proximity contributions.

1: I’m not doing a simple inverse of distance, which would indeed give infinite values on an exact match. I take the inverse of each distance plus an uncertainty estimate. I started with 100 km, which is actually larger than your 10-56 km cutoffs. For kicks, I did try setting a value as low as 10 km, and got very sharp spikes over time. In fact, the graph I shared used an uncertainty of 500 km, because there’s little difference between 100 and 500 km. Using a broader uncertainty is compensated for by raising the proximity result (FOM) to a small exponent for contrast. At 500 km, the exponent is 1.4.
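
If I write the figure of merit out explicitly, it is roughly the following (a sketch; the function and parameter names are mine):

```python
def proximity_fom(distance_km, uncertainty_km=500.0, exponent=1.4):
    """Inverse of (distance + uncertainty), raised to a small exponent to
    restore contrast when a broad uncertainty value is used."""
    return (1.0 / (distance_km + uncertainty_km)) ** exponent

# an exact match scores only ~2.6x more than a 500 km miss, so no single
# lucky track can dominate the daily sum
print(round(proximity_fom(0.0) / proximity_fom(500.0), 2))  # → 2.64
```

With a 10 km uncertainty instead, the same ratio at a 500 km miss would be enormous, which is why the small settings produce very sharp spikes.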

2&3: There is little if any bias toward slower debris. I suspect just the opposite. For any one day, the plot is simply showing how close the unbeached particles have gotten to all of the discovery sites. Look how the faster debris in green arrives in sharper waves. The CSIRO non-flaperon dataset in red shows a similar pattern of arrival, but generally delayed from the green and more stretched over time. They may have been more likely to wander in eddy currents. Farthest south, many probably went east past Oz. (I could try a separate proximity plot for debris going east of the coast.) If there’s a bias, it might be that modeled debris is more likely to beach in clusters on a broad coastline of Madagascar or Africa rather than a small island.

Before seeing your response, I was working on a slight modification of the algorithm. Instead of zeroing proximity only when a particle has beached, it is now also zeroed when a drifter is receding away from a discovery site. This brings out some detail about debris caught in eddy currents that I had suggested in 2020, where debris from the lower arc wanders until it is caught in the SEC, and the higher latitudes head straight west in the current and then get caught in eddies. Here’s the new intensity plot:

https://drive.google.com/file/d/1DKIZXy05kpnkK5taC2UzxkUDSPJbJLvJ

I’ve also computed a histogram summing the two datasets:

https://drive.google.com/file/d/1DHPKOUohi7OrqLIaYaPJUj5ML9LPv-gl

Oddly, my histograms are showing results almost opposite to yours, with a null between 32S-35S in both datasets. To emphasize my point about multiplying low probabilities, I’ve done just that for only the final two summed sets, labeled as “PDF product”. You can see that for certain latitudes (like an odd notch at 33.4S), the result goes from likelihood of 0.14 down to 0.024. I don’t think that matches reality any more than the CSIRO model accuracy, or the very large peak at 30.2S. If I used fewer histogram latitude bins, it would smooth out. The CSIRO dataset was randomized by starting longitude in a swath around the arc. If I instead either narrowed the swath or computed a more accurate starting latitude by distance from the arc, we might get a slightly different answer.

I combined debris finds that had duplicate discovery site lat/lon if they arrived at different times, but didn’t group by wider neighborhood. That seems correct, but tiny changes might shift the result.

As I mentioned in the past, the CSIRO tracks appear to split more north of Madagascar than other modeled drift studies using randomizations of windage.

A key difference between your approach and mine is that I am using the entire datasets. None of the 86,400 drifter tracks are excluded, and they are given equal weight. It seems like common sense to me that nowhere along the 7th Arc should the drift or search probability be cut off to zero. My charts don’t favor my own candidate site with the largest peak of all, but they do show that the probability is increasing again toward Java, with a peak there that is higher at 8.32S than any value north of 23S.

(BTW, the unoptimized Python code uses a single CPU core to crunch the results in 96 seconds, so iterative changes are fairly quick.) Making separate plots for each discovery site should unblur the waves of arrival.

]]>You posed some good questions.

1. You asked: “Does this mean that, for any endpoint latitude (e.g. S36), changing the speed changes the starting point (19:41) and probability?”

Yes. Changing from LRC to MRC, for example, reduces the speed by about 2% and therefore the 19:41 location must be about 2% closer (farther south) to the end point at 36S.

2. You asked: “And that the original probability was for the most probable speed without separate consideration about the fuel demands for that particular latitude endpoint?”

Yes. The UGIB results for (maximum) SIO Route Probability assumed there was adequate fuel. See Figure 5 in UGIB. It shows the route probability was still fairly high circa 39S, for example, where the fuel probability is essentially zero. At 36S the route probability was high (at LRC), but the fuel probability was low (also using LRC). What we later learned was that there is a slower speed (MRC) which is flyable to 36S with the available fuel. However, recent route fits show the route probability for the 36S MRC route is actually poor compared to LRC. That’s because of the higher BTORs (BTOR = BTO Residual). The expected BTOR standard deviation is 29 +/- 10 microseconds. The 36S MRC route’s value is about 52 microseconds, which is about 2.2 sigmas above the expected value for the True Route. The probability that this value (or higher) is due to measurement noise alone is less than 2%, so it is much more likely to be caused by systematic route errors. That’s why I previously concluded the overall probability is low for 36S: at LRC the route probability is reasonably high but the fuel probability is low. Using MRC does not help overall – it has a good fuel probability but its route probability is very low.
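
The “less than 2%” figure follows from a one-sided normal tail. Here is a back-of-envelope check, assuming a Gaussian distribution for the BTOR statistic (my assumption; note that with the rounded numbers used here the excess comes out nearer 2.3 sigmas than the 2.2 quoted above):

```python
from math import erf, sqrt

def one_sided_tail(value, mean, sigma):
    """Probability of exceeding `value` under a Normal(mean, sigma)."""
    z = (value - mean) / sigma
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# expected BTOR std dev: 29 +/- 10 microseconds; 36S MRC fit: ~52
z = (52 - 29) / 10
p = one_sided_tail(52, 29, 10)
print(round(z, 1), round(p, 3))  # → 2.3 0.011
```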

]]>You said: “South of 32S, there is a common narrowish band of arrival for the flaperon dataset, but non-flaperon drifters never approach the discovery sites.”

Your conclusion about the BRAN2015 predictions is incorrect, as demonstrated by the several CSIRO reports on debris drift, as well as by UI (2023). All discovery sites in UI have close approaches from crash sites south of 32S for non-flaperon debris. More specifically, Figure 10.1-1 in UI shows examples of trial drifter tracks from 34S arriving at all the debris locations analysed. Thus, your statement that “South of 32S . . . non-flaperon drifters never approach the discovery sites” is incorrect.

I suspect your error is mostly driven by your assumption of a figure of merit which ignores actual arrival dates and includes non-zero contributions for every day of every drift track. This allows a large number of days with large distance errors to contaminate your FOM. Your assumed FOM, which is the inverse of the distance from a finding location, introduces “smearing” and bias in both the crash latitudes and the arrival dates. Your FOM can’t tell the difference between having a very large number of days with very large miss distances and a smaller number of days with smaller miss distances. Common sense says that an analysis method which cannot do this is unlikely to be effective in predicting crash latitude. In UI we avoid this deficiency by allowing only one day from each trial, when the miss distance is the minimum.

To compare and contrast your FOM with UI:

1. Your FOM over-weights very small miss distances, which are smaller than the geographical location error of the drift model. That is, within the model localization error distance, the probabilities are close to being equal, because there is no basis in the model generating the drift tracks for distinguishing a difference in probability between a 1 km miss distance and a 10 km miss distance, for example. That’s why in UI we use a “flat-topped” miss distance window with a radius of 10-56 km. Within that window, we consider the probability of arrival to be equal, because we can’t know any different as a result of the drift model localization error. Section 11.1 in UI discusses the Bayesian PDF we used for the BRAN2015 localization error.

2. Your FOM over-weights trials with slower average speeds because you reward every day along a drift track at every finding location. In fact, there is only one debris per trial, so the maximum possible number of arrivals is the same for all trials – exactly 1 – independent of the average speed. In addition, your probability function biases the arrival times to later dates because it rewards slow tracks which don’t exit the calculation window.

3. Your FOM over-weights tracks with very large miss distances because your assumed FOM has too slow a cut-off with miss distance. Your FOM means that many days at large miss distances can overwhelm a smaller number of trials with small miss distances. The result is a skewed and smeared probability plot which ignores the actual finding dates of debris and which has reduced crash-latitude selectivity.
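
The contrast between the two weighting schemes in points 1 and 3 can be made concrete (a schematic comparison; the 40 km radius is an arbitrary value inside the 10-56 km range mentioned above, and the 1 km floor is mine):

```python
def flat_topped_window(miss_km, radius_km=40.0):
    """UI-style weight: every trial inside the window counts equally
    (the drift model cannot localize better than this); outside, zero."""
    return 1.0 if miss_km <= radius_km else 0.0

def inverse_distance(miss_km, floor_km=1.0):
    """Inverse-distance weight: strongly rewards very small misses, even
    those below the drift model's localization error."""
    return 1.0 / (miss_km + floor_km)

# inside the localization error, inverse distance claims a 5.5x difference
# the drift model cannot support; the flat-topped window treats both equally
print(round(inverse_distance(1.0) / inverse_distance(10.0), 1))  # → 5.5
print(flat_topped_window(1.0), flat_topped_window(10.0))         # → 1.0 1.0
```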

]]>You wrote: “I also note that the Route probability is significantly lower at MRC than it was at LRC for 36S (as plotted in UGIB).”

I apologize for not being well familiar with the route fitting process, but the UGIB route PDF shows almost the same probability for some parts around S36 as the peak probability around S34.3. Does this mean that, for any endpoint latitude (e.g. S36), changing the speed changes the starting point (19:41) and probability? And that the original probability was for the most probable speed without separate consideration about the fuel demands for that particular latitude endpoint?

]]>This will make it much easier to understand the plot.

]]>Interesting. Victor’s critique of Prof Chari’s model is its lack of wind effects. Assuming that is true, Chari’s model does show the debris timing approximately as observed. ]]>

Thanks for the link to David Griffin’s CSIRO drift model data cache.

I’ve taken a deep dive into tracking where the two datasets match up with where debris was found. One is labeled “flaperon”, and the other “non-flaperon”. Within the metadata there are further classes of rounded debris that catch windage vs honeycomb and flat materials that drift more slowly with the surface currents.

CSIRO didn’t randomize windage; instead, they varied the starting points within some radius of the 7th Arc, and split the dataset into windage and flat segments.

With those 2x 86,400 tracks, we can see how close they come to actual debris finds. For each day of drift, I calculated the distance from every modeled drifter to the discovery locations. Assuming poor accuracy (to avoid weighting a few good matches), I took the inverse of the distance from each drifter to every discovery site over time, and summed the results.
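
The per-day proximity sum described above can be sketched like this (a minimal version; the function names and array shapes are mine, and the beaching/stranding logic is omitted):

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    R = 6371.0
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(np.asarray(lon2) - np.asarray(lon1))
    a = np.sin((p2 - p1) / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2 * R * np.arcsin(np.sqrt(a))

def daily_proximity(track_lats, track_lons, site_lats, site_lons, eps_km=100.0):
    """For one drifter track (arrays over days), sum the inverse distance
    (plus an uncertainty floor) over all discovery sites, day by day."""
    total = np.zeros(len(track_lats))
    for slat, slon in zip(site_lats, site_lons):
        d = haversine_km(np.asarray(track_lats), np.asarray(track_lons), slat, slon)
        total += 1.0 / (d + eps_km)
    return total

# a drifter sitting exactly on a single site scores 1/(0 + 100) = 0.01
print(daily_proximity([-20.9], [55.5], [-20.9], [55.5]))  # → [0.01]
```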

I then binned the results by latitude of the starting points. This plot is for 30 lines per degree of latitude:

https://drive.google.com/file/d/1DHOKxmH94c2Aqipq1IwplRcHSqJz1Ztb

Green is for the high windage flaperon drift dataset, and red is for flatter debris that drift more slowly. When a drifter stops moving, like it’s stranded, the proximity is zeroed.

The graphic is fascinating. I expected to see waves of arrivals as debris reached each grounding site. Instead there are peaks as debris arrives at multiple sites. It’s clear that slower debris wanders much longer from the more southern latitudes. Below 32S, most of the low windage items are beyond the 3 yr calculation window.

There is a curious quirk at a narrow band around 30S, where all types of debris arrived within 200-400 days of where debris was found. Around 22S there is a minimum, where all debris traveled fastest to discovery sites. From 23S to 32S, there is a wide band where flaperon-like debris lands, but slower debris drifts for years in proximity to the sites. South of 32S, there is a common narrowish band of arrival for the flaperon dataset, but non-flaperon drifters never approach the discovery sites.

The plot near the Java candidate at 8.32S shows a good mix of flaperon-like arrivals at 300-400 days, with non-flaperon arrivals landing around 500-600 days.

This plot might be improved by breaking out each unique discovery site (20) by latitude. I’m looking forward to any further suggestions, or insights.

— Ed Anderson

]]>You say: “My sharp cutoff at -35 deg is not due to lack of fuel (I agree that there is enough fuel to reach to at least -36) but rather an increasingly poor match to the BFO data.”

That is interesting because, for example, the IG historic path to 37/38S (e.g. by FFYap) shows an excellent match to BFO, except that Arc 6 may be where that 37/38S path starts to diverge from a best BFO match. Part of the problem in critiquing 37/38S path believers is that the match to BFO is so darn good, at least through Arc 5.

]]>We know that SIO routes using LNAV are flyable with the available fuel at least as far down Arc 7 as 36S.

However, as sk999 recently and correctly pointed out, for Arc 7 latitudes north of 37S, a delay is necessary to connect the post-18:28 location near N571 to the fitted SIO route. That delay is about 20 minutes for the 34.2S UGIB SIO Route. For the 36S SIO Route, a delay of about 8 minutes is necessary to connect the FMT Route to the SIO Route. Taking into account the multiple means of satisfying the 18:40 BFOs, I have concluded that three turns would be needed to transition from the N571 right-offset path to the LNAV SIO Route at MRC. It’s not possible to achieve even a marginally acceptable 36S SIO Route probability without having at least three turns or two turns plus a HOLD.

I also note that the Route probability is significantly lower at MRC than it was at LRC for 36S (as plotted in UGIB). At MRC the BTO residuals increase to about 55 microseconds RMS, based on my modeling results. If anyone has achieved much lower BTO residuals at MRC, please post your route particulars. This apparently poor BTO fit substantially lowers the MRC SIO Route Probability compared to that of LRC (which has about 34 microseconds RMS BTO residuals).

Thus, there is a significant penalty in the BTO fit caused by reducing the speed to MRC so sufficient fuel is available to fly until MEFE. This reduces the SIO Route probability at MRC compared to LRC. So, if my MRC route fits are reasonably optimized, the 36S route is flyable but suffers from a poor BTO fit, and it requires a fairly complex FMT Route.

]]>@Niels,

For the UGIB sensitivity studies shown in Figures G-5 to G-7, the parameters which are fixed are listed in each figure under the title. These are fixed at the optimized values shown in Table G-1. Parameters not listed were allowed to vary. For example, in Figure G-5 (the longitude sensitivity study), the fixed parameters are LNAV at 180 degrees, FL 390, and LRC. Therefore, as the longitude of the route is fitted and plotted, the 19:41 latitude (not listed) is also allowed to vary for each fit. You must do that or you are not isolating the longitude sensitivity. In Figure G-6 we see the 19:41 latitude sensitivity with all other parameters being fixed. In Figure G-7, we plot the bearing sensitivity. When you do this, you have to allow the 19:41 latitude and longitude to vary as well as the bearing, because you can’t fully optimize the bearing unless you also allow the starting point to float. In G-7 the only fixed parameters are LNAV, FL390, and LRC. Thus the 19:41 latitude and longitude are allowed to vary for each of the bearing fits. The only way to isolate the bearing sensitivity is by excluding non-optimum starting locations.

I am gratified to learn that Steve’s slight tweak on the BFO probability produces even better agreement between our predictions of overall SIO Route probability.

Steve is correct that fitting up to six variables with complex figures of merit is not for the faint of heart. Good initial guesses help a lot, as well as using both forward and backward derivatives. Sometimes I also used the “Multistart” feature in the EXCEL SOLVER, which generates multiple nearby starting points and can detect a global minimum you can’t otherwise reach with the steepest-descent algorithm.
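As an illustration of the multistart idea (a sketch of the general technique, not Solver’s actual implementation), here is a Python version with scipy: a bumpy toy objective with several local minima, a local gradient-based minimizer, and many random starting points, keeping the best result.

```python
import numpy as np
from scipy.optimize import minimize

# A bumpy 2-D objective with several local minima (illustrative only).
def f(p):
    x, y = p
    return (x**2 + y**2) + 2.0 * np.sin(3 * x) * np.sin(3 * y)

# Multistart: run a local minimizer from many random nearby starting
# points and keep the best converged result.
rng = np.random.default_rng(1)
starts = rng.uniform(-2, 2, size=(20, 2))
results = [minimize(f, s, method="BFGS") for s in starts]
best = min(results, key=lambda r: r.fun)
print(best.x, best.fun)
```

A single descent run started in the wrong basin would stall at a local minimum; the spread of starting points is what gives multistart a chance at the global one.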

]]>The links to the CSIRO drift results can be found at the bottom of this article which discusses the UGIB 2020 paper:

https://mh370.radiantphysics.com/2020/03/09/new-report-released-for-mh370-search/

1. The current links to the 2023 drift paper on this site redirect to a filename ending in _old.pdf which might explain the differing page numbers. I found no newer link. BTW, in the current _old file, the TOC page numbers are off by one.

2. The CSIRO drift patterns don’t show zero probability for a Java site, as you showed in figure C.3-1 using only debris locations. It’s your time constraints that result in zero probability.

If low probability of location matches is the reason only 17 debris sites can be analyzed, that may indicate that the drift model isn’t accurately predicting the paths to where debris was actually found.

3. I agree that the flight path to Java requires turns, which cannot be covered by your models. It’s why I said the report does not apply to the Java site. The candidate is based on very specific new acoustic evidence. Route optimization methods are not needed to search the site.

DrB: “The flaperon arrival at Reunion is unlikely to have been missed if it beached at an earlier date, because the beach cleaners who found it made daily clean-ups.”

That’s an odd assumption, that the flaperon would show up in the exact same spot months later. Roy moved 8 km in four months. The flaperon could have been stranded earlier at any of the nearby islands before being found at Reunion. It was reported as seen months earlier, but we don’t know exactly where.

@VictorI:

Thank you for acknowledging that the Java site has a relatively precise location that makes it worth checking, even if it doesn’t match previous assumptions.

I don’t quite get the need for such elaborate constraints in the drift modeling statistics. You mention available data sets. If the CSIRO drifter tracks are available for another meta-analysis, I’d be curious to run them through Python pandas to gather some simple histograms using all of the daily data, weighted by the inverse distance from where debris was found. Grouping by transit speed might also be interesting.

]]>I agree. I was only trying to better explain what Ed meant.

The Java candidate site has many strikes against it, including complexity of the path and the disagreement with drift modeling results. However, I don’t exclude it with 100% certainty because we don’t really know whether there were pilot inputs and we don’t know for sure the accuracy of the CSIRO drift model, not to mention that we simply don’t know what we don’t know. What is attractive about Ed’s site is the precision of the location of the acoustic event along the 7th arc, which means it can be searched relatively inexpensively. For that reason, I don’t completely dismiss it, even though I think there is a considerably higher probability that the impact site is elsewhere.

Within the constraints of our assumptions, the available data sets, and the accuracy of fuel and drift models, I don’t know how we could have done much better.

]]>You asked, “… are you both referring to the same correlation coefficients here?”

I can only speak for myself. That was certainly my intent. I would use somewhat different language – e.g., where UGIB wrote “r for Leg Start BFOR to Leg End BFOR” [Appendix G1, #5], I would say that it was “BFO autocorrelation at lag 1 step” where “step = leg”. To my mind, they say the same thing.
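For concreteness, this is how I would compute autocorrelation at lag 1 step on a residual series. The residual values below are synthetic, generated purely to show the calculation, not actual BFO data.

```python
import numpy as np

def lag1_autocorr(x):
    """Pearson correlation between x[:-1] and x[1:], i.e. autocorrelation at lag 1."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

# Synthetic "residuals" at the leg endpoints (illustrative values only):
rng = np.random.default_rng(2)
white = rng.normal(0, 4, 200)              # uncorrelated residuals -> r near 0
drift = np.cumsum(rng.normal(0, 1, 200))   # strongly correlated residuals -> r near 1

print(round(lag1_autocorr(white), 2), round(lag1_autocorr(drift), 2))
```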

You also asked “… how can we explain the difference in your findings?”

Once again, I can only speak for myself. The best comparison between my findings and those of UGIB is given in the linked “updated route and fuel probabilities” from DrB’s comment on Nov 15, 2023, Figure 5. While there are differences in detail, we both find a broad range in the probability distribution of the arc 7 latitude. My sharp cutoff at -35 deg is not due to lack of fuel (I agree that there is enough fuel to reach at least -36) but rather an increasingly poor match to the BFO data. Upon further reflection, I have probably over-weighted the BFO data, so the cutoff at -35 should probably be more gentle than what I have drawn, which would give better agreement with the UGIB curve. These distributions are also broadly consistent with the results of my 25 simulations.

As far as the “narrow peak” in UGIB Fig 19 (repeated as Fig G-7), the description of this calculation is given as follows: “… we explored the sensitivity of the route probability to varying one flight parameter at a time, while the other flight parameters were constrained.” Without knowing what the constraints are, it is difficult to comment.

On a technical note, I use a gradient descent algorithm to determine the best-fitting route parameters, and while UGIB do not state which algorithm they used (other than that they used Excel Solver), I suspect that it was a gradient descent algorithm as well. A requirement for any such algorithm is that the objective function being minimized have a “smooth” dependence on the route parameters (and specifically, that the objective function be twice continuously differentiable). That is not the case here. The culprit is the fact that we interpolate in the GDAS table in 4 dimensions at each step along a route. Such interpolation introduces discontinuities in the derivatives as one crosses cell boundaries. The proper procedure would be to fit cubic splines to the tabular data, but doing so in 4 dimensions is rather daunting. The net result is that, while the minimizer can arrive close to a true minimum, it sometimes ends up in left field. I used the good solutions to derive initial values for certain of the route parameters that improved the likelihood of converging to the true minimum for the remaining routes. UGIB have a worse problem because their equation G-2 involves an absolute value of the Z-score, which introduces a large discontinuity in the 1st derivative at Z=0.
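Both derivative problems are easy to demonstrate numerically. In this Python sketch, a sine table stands in for one GDAS column (an illustration of the differentiability issue only, not of the actual route model): the first derivative of linear interpolation jumps at a cell boundary while a cubic spline stays smooth, and an absolute value has the analogous kink at zero.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Tabulated values standing in for one GDAS column (illustrative only).
xs = np.arange(0.0, 6.0)
ys = np.sin(xs)

def num_deriv(f, x, h=1e-6):
    """Central-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

lin = lambda x: np.interp(x, xs, ys)   # piecewise-linear table lookup
spl = CubicSpline(xs, ys)              # smooth alternative

# First derivative just left and right of the cell boundary at x = 2.
jump_lin = num_deriv(lin, 2 + 1e-3) - num_deriv(lin, 2 - 1e-3)
jump_spl = num_deriv(spl, 2 + 1e-3) - num_deriv(spl, 2 - 1e-3)

# The |Z| term has the same problem at Z = 0: the slope jumps from -1 to +1.
jump_abs = num_deriv(np.abs, 1e-3) - num_deriv(np.abs, -1e-3)
print(jump_lin, jump_spl, jump_abs)
```

The linear interpolant shows a large derivative jump at the boundary, the spline's is negligible, and the absolute value's slope jumps by 2; any quasi-Newton step that straddles such a point gets a misleading gradient.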

In case you are interested, here is a detailed explanation of the equations that I use:

https://drive.google.com/file/d/1IjLUTD-vNuVzlfXRY-ajJF9aN1NBbM2a/view

]]>@370Location,

Victor said: “@DrB: Relative to earlier arrivals, I think @370Location means northern latitudes were calculated to have a near zero probability because the predicted first wave of debris was well before the reported recovery date. He has made the observation before that we can’t know for sure when the debris actually beached, the presence of barnacles notwithstanding.”

The flaperon arrival at Reunion is unlikely to have been missed if it beached at an earlier date, because the beach cleaners who found it made daily clean-ups. It’s also highly unlikely to have been stranded on coral reefs near the shore. Photographs of the rocky beach near Saint-Andre when the flaperon was found show no signs of shallow offshore reefs which could snag floating debris. If the debris was not stranded near the shore, then the presence of many live barnacles does indicate a very recent arrival, consistent with the finding date.

From Section 4.3 (Reporting Delay) in UI (2023): “The reporting delay is bounded by a range of many months’ duration for barnacle-free debris, but it is otherwise free to vary within its bounds because we don’t know the actual arriving date, only that it was found after an unknown but possibly considerable length of time. Barnacle-free debris are typically less effective in discriminating crash latitude because the arriving date is loosely constrained. Barnacle-free debris depend primarily on their finding locations to discriminate crash latitude, rather than on their arriving date.

We allow the reporting delay (delta) to be from 10 to 150 days for debris with no barnacles attached, from 5 to 30 days for the two debris we analyzed (D9 and D23) which were found with a few barnacles attached, or zero days for the flaperon (D2), which was found with many barnacles attached. Thus, the estimated arriving date has an allowed range of values which depend on the number of barnacles on the found debris.”

In UI (2023) we allowed the reporting delay to be up to five months for each latitude bin, so no latitudes were penalized because the reporting delay was long or uncertain.

]]>You said: “The PDF on the last page 89 showing the combined probability without the aerial exclusion may be the most realistic for the consensus search area range. It still uses the dubious time selection exclusions for drift, and assumes no turns giving zeros for the other three PDF components, which make it inapplicable to my candidate near Java.”

1. There is no page 89. Perhaps you meant page 83. Regarding aerial search, since none was done near Java, we used an aerial search probability of 100% [see Figure 14.5-1 in Ulich and Iannello (2023)]. That does not exclude or penalize Java.

2. Regarding the drift probability, Figure 12.1-1 in UI (2023) shows the probability at Java latitudes is essentially zero, based on the CSIRO drift simulations. There is no time exclusion since all latitudes are processed identically and over the entire calculation window. Very low probability events such as “Roy” cannot be processed to estimate crash latitude (as explained in that paper), even with the large number of predicted drift tracks available from CSIRO. Not including any single debris does not induce a bias in the crash latitude prediction. It only reduces the precision of the latitude prediction.

3. It is impossible to predict the probability of MH370 SIO routes by matching the Inmarsat data unless one assumes the condition that there are no major turns or speed changes along the route. We don’t know yet whether this condition is true (and we won’t unless the FDR is found and the data are successfully recovered). However, UGIB and others demonstrated there is no necessity of turns or speed changes to fully explain the Inmarsat data. Since no simple route to Java is consistent with the Inmarsat data, our conditional probability for a crash there is essentially zero. That prediction does not prove a crash there did not happen, because it is possible (although I think it is unlikely) that the assumed condition was violated.

]]>Right. We can’t know when debris actually arrived in an area based on when it was later found beached.

I think we’ll have to agree to disagree on the joint probability methods. I suggest summing these multimodal distributions for overall maximum likelihood, and exponentiating if you want a sharper peak.
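To make the comparison concrete, here is a small Python sketch on made-up per-site probabilities: the plain product collapses to zero wherever any one site has a zero bin, the log-space geometric mean with a small epsilon (the approach I described earlier in the thread) survives it, and the arithmetic mean is the most forgiving.

```python
import numpy as np

# Toy per-site probabilities over latitude bins: 19 "debris sites",
# one of which has a zero in one bin (all values are made up).
rng = np.random.default_rng(3)
p = rng.uniform(0.0, 1.0, size=(19, 38))
p[4, 10] = 0.0                        # a single zero bin at one site

naive_product = p.prod(axis=0)        # the zero drives the plain product to zero

# Geometric mean via a log-space sum with a small epsilon:
# geomean = exp(sum(log(x + eps)) / n) - eps
eps = 1e-6
geomean = np.exp(np.log(p + eps).mean(axis=0)) - eps

# Arithmetic mean (summing) barely notices the isolated zero.
arimean = p.mean(axis=0)
print(naive_product[10], geomean[10], arimean[10])
```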

I’ve taken another look at your 2023 drift report. I have to again conclude that it doesn’t apply to my candidate site. Page 11 of the report notes that “Roy” was excluded, presumably because of too few drifter hits. It appears that 12 locations with 17 pieces were excluded, and 17 analyzed.

All validation tests were only applied to latitudes 27-40S. Most of the focus of the paper was on sorting out two peaks at 34S and 38S. It is unclear which assumptions from the abandoned Methods I and II were carried into the final Method III.

Appendix A pg 67 discusses how multiplying by zero probability for drifter predictions is avoided by adding an additional bin count of one. Pg 69 talks about how there must be a minimum of two drifter location+time matches at each and every debris find location or an origin is excluded. (Again the problem of multiplying by zero when taking the joint probability of 27 flat PDFs).

The time window to include a match is that the predicted day must not be earlier than a median of 53 days from when it was estimated to be found. That is just not realistic. For example, the Rodrigues Island find was 11 months after the flaperon find at nearby Reunion Island. Such disparity can’t be explained by waves of predicted late arrivals from different eddy releases, which are considered matches when they are found very late.

Example graphs on pp 22 and 74 show that predicted drifters are at least 5 times more likely to arrive very early in a large group, and from latitudes 8-25S. Page 81 gives a PDF/histogram without the time constraint, showing that arrivals at 11S are just about as likely as those at 30S. It’s another multimodal histogram which is not selective for latitude, but it may indicate that drift really isn’t very selective for latitude.

Again, applying the late aerial searched probabilities to exclude search areas is questionable. Had the more recent and accurate underwater searched areas been included in the report and multiplied the same way, your result would be *all* zeros for the joint PDF result. The PDF on the last page 89 showing the combined probability without the aerial exclusion may be the most realistic for the consensus search area range. It still uses the dubious time selection exclusions for drift, and assumes no turns giving zeros for the other three PDF components, which make it inapplicable to my candidate near Java.

]]>You said: “Or, perhaps it doesn’t give the answer you’re looking for.”

I’m not looking for any specific answer. We took great pains to avoid bias in our analyses, and we described exactly what we did and the answers we got. That’s how objective science is supposed to work.

You said: “Your chart shows a nil probability for a drift origin at Java, but that’s because the 2023 paper uses an even narrower window of arrival than the 2020 paper, again excluding tropical latitudes.”

That is incorrect. The earliest arrival date analyzed was 1 day after the crash. No “early arrivals” were excluded. Some plots may not have had the abscissa origin at 1 day, but that’s because there were no predicted arrivals prior to the plotted dates. No arrivals were excluded from day 1 out to the maximum delay of the CSIRO calculations.

In both the 2020 and 2023 papers we used the full latitude range of CSIRO drift predictions – from 8S to 45S – for calculating the drift probability, and we analyzed all arrival dates over the full range of CSIRO predictions. No predicted arrivals were excluded.

Yes, the UGIB (2020) fits to the 19:41 – 00:11 Inmarsat data used a flight model with a route which had no turns and no major descents or slow-downs.

]]>