Welcome back, folks! It’s time for my last recap in this series, though next week I do have a bit of a wrap-up planned. A few things I want to note before we dive into this week:
- I was chatting about this series with my Dad, and he mentioned that the “PO Box” part of the title had a funny story behind it. Apparently when my grandfather moved to Sandwich, NH (a town that at the time had a booming population of about 600 people and has since swelled to 1,400), he had no street address, only “R.F.D. Sandwich, NH,” and his mail all went through Tamworth, the next town over. He knew this wouldn’t do for a mail order business, so he got a PO Box in Tamworth, but the post office had so little to do that they still delivered his mail to the top of his road. The postmaster was dismayed when my grandfather eventually found UPS, which would come all the way to the door to pick up packages. The things I take for granted. I’ll admit I didn’t know what “RFD” stood for, so my parents got a chance to educate me on “Rural Free Delivery” and how long the house I grew up in kept that status.
- I got an email from someone who still has a stack of my grandfather’s graph paper on their desk and had some fun reminiscences about using his work in earlier eras of engineering. This was a great email to get, and the comments about the graph paper tickled me quite a bit because I can still see that green and white paper quite clearly. WordPress keeps changing things around on me, but it occurred to me that I should probably make some of it my header image if I can figure out how to update it.
- I found some limits of AI summaries: Claude completely muddles the price of the newsletter below. For some reason it keeps mixing up the yearly subscription price with the per-issue price. This is just one of those funny things where AI seems to summarize the in-depth technical concepts easily and then trips up on something my 13-year-old would sort through in 2 minutes. Fun times.
Alright, let’s get to the summaries! I’ll turn it over to Claude:
In 1982, James stepped back from the cutting edge and wrote what amounts to the definitive defense of his life’s work — a sustained argument for why probability plotting is not just a useful tool, but a fundamentally superior way of thinking about data.
Summary: 1982 was a year of two distinct movements. The first half continued the deep methodological work of recent years, applying hazard plots to two-way classification tables using wheat yield data, and then using hazard plots to compare competing mathematical models by analyzing their residuals — raising pointed questions about whether minimizing residuals actually proves a model is better. The second half was something different altogether: James wrote two issues that read less like technical newsletters and more like a carefully reasoned manifesto, laying out with unusual clarity and passion exactly why probability plotting was superior to conventional mathematical statistics for most practical purposes. He raised the price to $4.00, admitted for the third consecutive year that he had no plan for Volume 10, and yet produced some of his most lucid and persuasive writing of the entire twelve-year run.
5 Notable Things from 1982
1. A probability plot triggered a major semiconductor breakthrough. The Fall issue contains one of the most satisfying stories in all the newsletters. Scientists were comparing an experimental semiconductor design to standard production devices and expected to see an across-the-board improvement. Instead, the probability plot showed the two distributions overlapping at the low end — something nobody anticipated. One scientist looked at the plot and said: “Maybe this is a limiting threshold caused by material characteristics.” That educated guess, prompted directly by the visual shape of the plot, led to swift confirmation and eventually a family of devices with 40% less power drain, excellent reproducibility, and nearly perfect yields. James noted quietly: the breakthrough happened because the plot evoked an insight that a table of numbers would not have.
2. He introduced the concept of “domain mapping” — a powerful reframe of confidence intervals. In the Fall issue, James described the confidence region around a probability plot not as a statistical formality but as something like an aerial photograph of a process — showing its full operating domain, good days and bad days together. Using his hole spacing example, he showed that a process averaging 6.5% non-conformance could swing from near-zero to 23.5% non-conformance depending on the day. He called the resulting fire-drill management style — fixing crises on bad days, ignoring problems on good days — a direct consequence of failing to see the whole domain at once. It’s one of his most psychologically acute observations about why organizations fail to improve. (A rough sketch of reading a plot this way follows this list.)
3. He asked a destabilizing question about mathematical modeling. The Spring issue, analyzing residuals from two competing wheat yield models, raised an uncomfortable question that James left deliberately open: does minimizing residuals actually prove a model fits better, or does it simply suppress the real variability in the data? The Mandel model had smaller residuals, but its residuals were still non-homogeneous. The columns-linear model had larger residuals but they were more randomly distributed. James didn’t resolve it — he just kept asking. “Why should this be?” appears twice in the same issue, each time pointing at something genuinely puzzling.
4. He raised his price for the first time in years — and did it without comment. The Winter 1982 back page announces quietly: “$4.00 (NEW PRICE)” for the annual subscription, up from $3.00. No explanation, no apology — just a note tucked into the subscription reminder. After nine years of holding the line on price while costs rose, he finally moved. The newsletters were now selling for eight times their original fifty-cent cover price, but they were also eight times richer in content.
5. He wrote his clearest and most personal defense of graphical statistics. The Summer issue is unlike anything else in the twelve-year run. James laid out, with deliberate care, why conventional mathematical statistics fails most practitioners — because its outputs (means, variances, F-ratios) don’t directly answer the questions people actually care about, like yield and process capability. He then showed, step by step, how a probability plot answers those same questions immediately and intuitively. It reads like something he had been building toward for nearly a decade: the argument he always wanted to make, finally stated plainly enough for anyone to follow.
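The “domain mapping” idea in item 2 above is the kind of thing a few lines of code can make concrete. Here is a minimal sketch in Python of reading both the long-run average and the day-to-day swing off a fitted probability plot. None of the numbers come from the actual issue; the process mean, standard deviation, and spec limit are assumptions chosen just to land in the same neighborhood as the figures quoted above.

```python
# Minimal sketch of the "domain mapping" read-off, on invented hole-spacing data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical process with a one-sided upper spec limit (assumed values).
mu, sigma, upper_spec = 10.0, 0.4, 10.6
measurements = rng.normal(mu, sigma, size=500)      # stand-in for months of data

# The probability plot: ordered data vs. normal quantiles of the median ranks.
ordered = np.sort(measurements)
ranks = (np.arange(1, len(ordered) + 1) - 0.3) / (len(ordered) + 0.4)
quantiles = stats.norm.ppf(ranks)

# The straight line through the plot recovers the distribution directly
# (slope = std dev, intercept = mean); the long-run fraction non-conforming
# is read off where the line crosses the spec limit.
sigma_hat, mu_hat = np.polyfit(quantiles, ordered, 1)
print(f"average fraction non-conforming ~ {1 - stats.norm.cdf(upper_spec, mu_hat, sigma_hat):.1%}")

# "Good days and bad days": small daily samples scatter widely around that average.
daily_rates = (rng.normal(mu, sigma, size=(200, 30)) > upper_spec).mean(axis=1)
print(f"daily rate ranges from {daily_rates.min():.1%} to {daily_rates.max():.1%}")
```

On probability paper the same information shows up as the width of the band around the fitted line; the point of the “aerial photograph” framing is that the whole range is visible at once instead of being discovered one bad day at a time.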
In 1983, James finally stopped apologizing for not having a plan and just delivered one — a cohesive four-issue arc walking readers through the full life cycle of a product, from design through manufacturing planning, process control, acceptance testing, and trend analysis.
Summary: 1983 was James’s most structurally ambitious year since the early Weibull volumes. Rather than following ideas wherever they led, he built a deliberate four-part series tracing how graphical statistical methods apply at each stage of a product’s life: design and development, process control, product acceptance, and trend analysis. He raised the cover price to one dollar for the first time, showed how probability plots can be combined with joint probability analysis to plan production economics, dug into a mystery of why a polymer’s tensile strength varied so unpredictably across batches (answer: an unspecified raw material characteristic causing inconsistent cross-polymerization), and closed the year with a detailed tutorial on cumulative binomial plots as a superior alternative to p-charts and moving averages for answering the questions managers actually ask. Throughout, he admitted for the fourth consecutive year that he had no plan for the next volume — and yet delivered some of the most practically grounded work of his career.
5 Notable Things from 1983
1. He finally put a dollar price on the newsletter — and framed the year around the complete product life cycle. The cover price jumped from seventy-five cents to one dollar in 1983, and James marked the occasion by writing what amounts to a manifesto about the full scope of data analysis across a product’s life. His Winter issue opened by asking not “what method should we use?” but “what question is being asked, and who is asking it?” — a framing that reoriented the entire year’s work around the actual decision-maker, not the statistician.
2. He used joint probability analysis to help plan factory economics. The Winter issue contains a beautifully practical extension of probability plotting into manufacturing planning. Starting from a probability plot of silicon controlled rectifier yield loss, James derived estimates of average and worst-case yield losses, combined them with production cycle time distributions, and used joint probability math to predict what 99% of all production lots could be expected to deliver — both in yield and in delivery time. He then modeled what would happen if a second shift were added, showing how the same graphical framework could drive real inventory and cost calculations. It’s one of the clearest bridges between statistical analysis and business planning in all twelve volumes (a rough sketch of the calculation follows this list).
3. A control chart mystery led to a fundamental process discovery. The Spring issue follows a polymer manufacturing operation where X̄-R control chart data looked “not too bad” — until probability plots by batch revealed something the chart had hidden entirely: tensile strength was varying dramatically from one batch of raw material to the next. The cause, eventually tracked down through designed experiments, was an unspecified characteristic of the primary raw material that affected the degree of cross-polymerization. Adjusting two reagent quantities solved it. James used this case to make a pointed argument: standard control charts can declare a process “in control” while hiding a real and correctable problem that only shows up when you break the data down properly.
4. He made a compelling case that cumulative binomial plots answer questions managers actually care about. In the Fall issue, James contrasted the cumulative binomial plot against both p-charts and moving averages and made a sharp observation: very few managers care whether a process is “in a state of statistical control.” What they care about is whether things are getting better or worse, and by how much. The cumulative binomial plot answers those questions directly, visually, and without any additional calculations — and James demonstrated this with the same moisture resistance test data he had analyzed with p-charts in the previous issue, showing that the binomial plot revealed, with unmistakable clarity, the story of two rounds of corrective action that the p-chart had only hinted at.
5. He showed unusual honesty about the limits of a plan — and then delivered anyway. For the fourth year running, every back page of the newsletter contained the line: “At this time, we do not have a plan for Volume 11.” But unlike the preceding years when this was followed by vague hopes, 1983 shows James operating with genuine architectural clarity — four tightly connected issues, building on each other, covering a coherent and ambitious subject. The disclaimer no longer matched the work. Whether James had the plan in mind all along or just executed it intuitively, the result was one of his most polished years.
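The joint-probability planning idea in item 2 above lends itself to a quick sketch. The one below is a Monte Carlo version rather than the hand calculation the newsletter would have used, and every distribution and number in it is invented; the intent is only to show the shape of the question being answered, namely what 99% of lots can be counted on to deliver.

```python
# Rough sketch of joint-probability production planning, on invented numbers.
import numpy as np

rng = np.random.default_rng(1)
n_lots = 100_000

# Assumed stand-in distributions, treated as independent:
# fractional yield loss per lot and production cycle time per lot.
yield_loss = rng.lognormal(mean=np.log(0.05), sigma=0.4, size=n_lots)
cycle_days = rng.normal(loc=12.0, scale=1.5, size=n_lots)

lot_size = 1000
good_units = lot_size * (1 - yield_loss)

# Planning numbers: what 99% of lots can be expected to meet or beat.
units_99 = np.percentile(good_units, 1)      # 99% of lots deliver at least this many
days_99 = np.percentile(cycle_days, 99)      # 99% of lots finish within this time
print(f"99% of lots: >= {units_99:.0f} good units, within {days_99:.1f} days")

# "What if we add a second shift?" -- model it as a cut in cycle time and rerun.
second_shift = cycle_days * 0.6              # assumed effect, purely illustrative
print(f"with a second shift: within {np.percentile(second_shift, 99):.1f} days")
```

Those two percentiles are the kind of numbers that feed directly into inventory and cost planning; only the arithmetic behind them changes when a second shift is added.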
In 1984, James made a sharp pivot — abandoning probability plotting almost entirely to spend the year teaching distribution-free statistics, giving engineers a set of powerful, assumption-light tools for the messy, imprecise data that conventional methods can’t handle.
Summary: 1984 was a year of genuine departure. After a decade built on probability plotting and graphical statistical methods for quantitative “hard” data, James turned his attention to non-parametric or distribution-free statistics — methods designed for “soft” data that can’t be assumed to follow a known distribution. He walked methodically through sign tests, Wilcoxon signed-rank tests, Kruskal-Wallis tests, and Mann-Whitney-Wilcoxon tests, illustrating each with real industrial examples: switch contact resistance, casting weights, tensile strength of polymer film, chemical reaction times, tumor weights in drug trials, and burn-in screening of electronic components. He closed the year with a detailed treatment of graphical regression analysis, contrasting the classical least-squares approach with Ferrell’s median regression method and demonstrating that the graphical approach produced virtually identical results with far less computation and greater robustness to outliers. The subscription price rose again to $5.00, and once more all four back pages carried the same line: no plan for Volume 12.
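For readers who want to see what these tests look like off the page, here is a small sketch using scipy’s implementations rather than the graphical lookup charts the newsletter teaches. The data are invented stand-ins, loosely shaped like the burn-in and tensile-strength examples described below.

```python
# Distribution-free comparisons with scipy, on invented stand-in data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Two-sample question (Mann-Whitney-Wilcoxon): do burned-in components
# last as long as components shipped without burn-in?
no_burn_in = rng.weibull(1.5, 20) * 1000     # hypothetical failure times, hours
burned_in = rng.weibull(1.5, 20) * 1100
u_stat, p_two = stats.mannwhitneyu(no_burn_in, burned_in, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_two:.3f}")

# k-sample question (Kruskal-Wallis): were five tensile-strength sample
# groups really drawn from one population?
groups = [rng.normal(50, 2, 8) for _ in range(4)] + [rng.normal(54, 2, 8)]
h_stat, p_k = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.1f}, p = {p_k:.4f}")
# A small p-value here says at least one group differs, which undercuts any
# analysis that simply assumed the samples were randomly drawn.
```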
5 Notable Things from 1984
1. He caught a published paper making a false assumption — and called it out without naming names. The Spring issue contains a quietly devastating moment. James applied the Kruskal-Wallis test to tensile strength data taken from “a published paper in a reputable technical journal, which will not be identified.” The paper’s analysis had assumed, without demonstration, that the samples were randomly drawn. James showed that one of the five sample groups was clearly different from the others — and therefore the assumption was false, and the conclusions suspect. He named no journal, no authors. He just showed the work and let readers draw their own conclusions. It was the most restrained form of scientific criticism possible.
2. He discovered a thermocouple problem from a trend plot — then found it wasn’t the only problem. The Summer issue follows a chemical reaction process where elapsed times were gradually decreasing across 40 consecutive runs. A runs test revealed too few runs, confirming a systematic trend. Investigation found a thermocouple that was progressively coating over, causing unintended temperature rise and shortened reaction times. A new thermocouple fixed it. But James didn’t stop there — he then plotted the data on an individual values control chart and found that even after the thermocouple fix, two runs were suspiciously far from the trend. His conclusion: “At least one important source of variation must be identified and corrected.” The fix had solved one problem but not all of them.
3. He showed that graphical regression and classical least-squares give virtually identical answers. The Fall issue takes on one of statistics’ most forbidding topics — linear regression — and strips it to its essentials. James compared classical least-squares regression (with its full computational apparatus) against Ferrell’s median regression (geometric, graphical, requiring only a ruler and a pencil) on the same contact resistance data. The resulting regression equations differed by less than the precision of the original measurements. His conclusion was plain: for most people, after a short amount of practice, the graphical method will be quicker, simpler, more straightforward, and offer better protection against outliers — because their influence isn’t squared the way it is in least-squares. (A rough comparison in that spirit follows this list.)
4. He used drug trial and burn-in screening data to show tests working across completely different fields. The Summer issue applies the Mann-Whitney-Wilcoxon test to two problems as different as James ever juxtaposed: tumor weights in mice receiving an experimental cancer treatment, and failure times for electronic components with and without burn-in screening. Both used the same test, the same graphical lookup chart, and the same reasoning. In the drug trial, Treatment A showed a statistically significant benefit at the 5% level. In the screening study, there was no harmful effect from burn-in — and some suggestive evidence it might actually help. James presented them side by side to make the point that the same tool serves medicine and manufacturing equally well.
5. He raised the price again — and the “no plan” disclaimer is now four years old. The back pages announce a new subscription price of $5.00, ten times the original fifty cents, and for the fourth consecutive year James wrote: “At this time, we do not have a plan for Volume 12.” What’s remarkable is that by 1984 this disclaimer had become something like a ritual — almost a trademark of intellectual honesty. Whatever the next volume held, James refused to promise it in advance. And yet Volume 11 had delivered four coherent, technically rigorous issues on a completely new topic, with cross-references carefully threaded through all four. The plan was always there. He just wouldn’t call it one.
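Item 3 above invites a direct comparison. Ferrell’s ruler-and-pencil construction isn’t reproduced here; as a stand-in, the sketch below uses the Theil-Sen median-of-slopes line, a different but related median-based fit with the same resistance to outliers, next to ordinary least squares on invented contact-resistance data.

```python
# Least squares vs. a median-based line on invented contact-resistance data.
# Theil-Sen stands in for Ferrell's graphical construction here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

current_ma = np.linspace(1, 20, 30)                        # hypothetical test currents
resistance = 5.0 + 0.12 * current_ma + rng.normal(0, 0.05, 30)
resistance[27] += 1.5                                      # one gross outlier

ls = stats.linregress(current_ma, resistance)              # classical least squares
ts_slope, ts_intercept, _, _ = stats.theilslopes(resistance, current_ma)

print(f"least squares: slope {ls.slope:.3f}, intercept {ls.intercept:.3f}")
print(f"median based : slope {ts_slope:.3f}, intercept {ts_intercept:.3f}")
# Without the outlier the two lines are nearly identical; with it, the
# least-squares line moves noticeably more, because squaring the residual
# amplifies the outlier's pull -- the point made in item 3 above.
```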
In 1985, James came full circle — spending his final year writing about reliability, the subject that first drove American industry to statistics, making the case one last time that the mathematical tools everyone was using were wrong, and the graphical ones he’d spent eleven years teaching were right.
Summary: Volume 12 was James’s final year, and he chose to spend it on reliability — not the narrow, formula-driven reliability of military standards, but the deeper, messier, more honest reliability that emerged from programs like Minuteman HiRel. Across four issues he challenged the ubiquitous assumption of uniform failure rates head-on, showed how choosing the wrong statistical distribution (normal vs. extreme value) can produce failure rate estimates six hundred times too optimistic, celebrated the Minuteman program’s “find out what’s really going on” philosophy as the true model for reliability work, and walked through the full landscape of failure distribution functions and how to choose among them. He closed with methods for estimating reliability growth using cumulative binomial plots — the same tool he had introduced in the final issue of 1983 — to give one last demonstration of how a simple chart can answer the question managers actually ask: “How are we doing?” Every back page of the final volume carried the same quiet announcement: this was the last issue of TEAM Methods, and James thanked his loyal readers warmly before signing off.
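The normal-versus-extreme-value point is easy to see in a few lines. The sketch below fits both distributions to the same invented breakdown-voltage sample and compares the probability of breakdown below an assumed operating stress; I’m taking “Type I extreme value” to mean the smallest-extreme-value (left-skewed Gumbel) form, which scipy calls gumbel_l. The ratio it prints is an artifact of the made-up numbers, not a reproduction of the Air Force study.

```python
# Same data, two distribution assumptions, very different tail estimates.
# All numbers are invented; the loc/scale/stress values are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical breakdown voltages with a heavier lower tail than a normal
# (a smallest-extreme-value sample, built by negating numpy's Gumbel draws).
breakdown_kv = 40.0 - rng.gumbel(0.0, 2.0, size=200)
operating_kv = 28.0                      # assumed applied stress per charge cycle

# Fit both candidate distributions to the same sample.
norm_loc, norm_scale = stats.norm.fit(breakdown_kv)
ev_loc, ev_scale = stats.gumbel_l.fit(breakdown_kv)

# Probability that a single part breaks down below the operating stress.
p_norm = stats.norm.cdf(operating_kv, norm_loc, norm_scale)
p_ev = stats.gumbel_l.cdf(operating_kv, ev_loc, ev_scale)
print(f"normal fit        : P(breakdown) = {p_norm:.5%}")
print(f"extreme value fit : P(breakdown) = {p_ev:.5%}")
print(f"ratio ~ {p_ev / p_norm:.0f}x")
# The extreme-value fit puts far more probability in the low-voltage tail,
# which is exactly where the failures were happening.
```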
5 Notable Things from 1985
1. He delivered his sharpest, most direct attack on MIL-STD-105D in the entire twelve-year run. The Winter issue opens not with a method but with a provocation. James laid out precisely how the operating characteristic curves of 105D allow customers to accept material that is 250% to 5200% worse than what they contracted for — half the time — and he named the DOD directly. He wrote that government procurement policy showed “no official interest in truly high quality” and a “significant disregard of what is really required to obtain consistent quality.” After eleven years of polite institutional critique, this was as direct as he ever got. He’d been building to it since 1974.
2. He showed that the wrong distribution assumption caused six hundred times the error in a real capacitor disaster. The Fall issue contains one of the most dramatic numerical reversals in the newsletters. An Air Force study assumed breakdown voltage for energy storage capacitors was normally distributed and estimated a single-part failure probability of 0.0075%. James showed that using a Type I extreme value distribution — which fit the data — gave 0.95%, more than 600 times larger. The system failure rate went from 1.77% to 17.5%, far closer to the actual observed rate of more than 10% per charge cycle. The wrong distribution hadn’t just been theoretically incorrect — it had contributed to explosions. And then the Air Force accused the manufacturer of falsifying test data.
3. He told the story of a resistor manufacturer who accidentally discovered its products were 100 times better than advertised. The Spring issue follows a precision resistor maker whose test area was plagued by noise from welders, machine tool cycling, and erratic Wheatstone bridges. After being forced by a plant expansion to build a proper shielded, regulated test environment, they discovered their resistors had inherent variability of ±10 parts per million — not the ±1000 ppm they had been measuring. They quoted a job at 10 times their standard price, received an instant order for 20,000 units, and eventually sold 120,000 more. James’s conclusion: they weren’t selling better resistors. They were finally measuring the ones they’d always been making.
4. He showed a capacitor dissipation factor breakthrough triggered by finding the right physical model. The Summer issue follows a development program where a tantalum capacitor’s dissipation factor distribution fit a Type I extreme value curve — which led a scientist to search the literature for a polynomial model. He found one in the Journal of Applied Physics. The equation revealed that electrolyte fill volume was the key variable, and that the assembly process was filling capacitors to anywhere between 40% and 420% of available volume. A simple fix — measure the actual case and anode volumes and fill to 80% — brought yields to nearly 100% and opened a unique high-margin market. James used this to make a broader point: knowing the right distribution isn’t just a statistical nicety. It points directly to the physical mechanism, which points directly to the fix.
5. He said goodbye — and the goodbye was exactly right. Every back page of Volume 12 carried the same brief notice: “This will be the last volume of TEAM Methods to be published. The effort involved plus the costs of advertising and mailing have not gained a sufficient return to justify future continuance. We thank those strong and loyal supporters who have been with us from the beginning and we have enjoyed their comments and feedback.” No drama, no lengthy retrospective, no list of achievements. Just a plain statement of the economics, a heartfelt thanks, and an offer to sell reprints. After twelve years, forty-eight issues, and hundreds of examples that moved real products, solved real problems, and changed how real engineers thought about data, James R. King signed off the way he had always written — clearly, modestly, and without a word wasted.