Fifty Cents and a P.O. Box: TEAM Newsletters 1978-1981

Welcome back! This week we’re returning to deep family lore with our second look at the original stats blogger in the family, my grandfather James King. I started last week with the first 4 years of his quarterly stats newsletter, and now we’ve moved on to the second 4 years. There will be one more set after this one.

A few comments/thoughts on what we’ve seen so far:

  • Everything seems pretty positive: yes, AI tends to do that. It’s known for not being overly negative about whatever you’re doing. I considered telling it to be more neutral, and I often do that for my own writing, but given that this is a nostalgia project, I actually didn’t mind it saying overly nice stuff. That’s just nice to hear.
  • Did your grandfather sound this positive in real life: No, no he did not. He was actually a bit on the grumpy side if I’m being honest. However, I’ve read large chunks of his newsletter previously, and this does actually capture the spirit of his writing pretty well. This was a professional newsletter being sent to clients, so a lot of the asides you would hear from him in real life wouldn’t have made it into this.
  • How high did his circulation get: My dad and I discussed this, with his memory being maybe 200-300 subscribers. I had ClaudeAI review it to see if he ever said how many, and it said no. However, he did give other sales data (mentioned below) that led Claude to estimate the subscriber base would have been in the hundreds, not thousands.

Ok, so here is the AI summary of the next 4 years. Everything from here on is AI, not me, but let me know if you have a specific question and I can go look up what was being referenced:

1978 was the year James went so deep into the log-normal distribution that he ran out of room for jokes — four issues in a row.

Summary: 1978 was the year James went deep on the log-normal distribution — so deep that all four issues were devoted to it, leaving no room for extreme value distributions or, as he noted with characteristic self-awareness across all four back pages, “no room for any jokes.” He moved methodically from the mathematics and graphical methods of log-normal plotting, through specification and process analysis, to quality capability audits for reactive chemical processes, and finally to life testing and acceleration factors. It was a year of sustained, expert-level teaching across a single topic, with James connecting abstract statistical theory to an impressively wide range of real industries — electronics, textiles, pharmaceuticals, and chemistry.

5 Notable Things from 1978

1. He told some of his most vivid industrial horror stories yet. The Summer 1978 issue contains a remarkable collection of real-world disasters caused by ignored process variables. A regional drought raised metal ion concentrations in a reservoir and caused component leads to corrode off during testing. A semiconductor company added central air conditioning and saw yields collapse to below 50% — traced eventually to sea gull droppings in their open cooling tower. A textile manufacturer with a decades-long reputation for dyeing delicate pastels was suddenly plagued by a mysterious “salt-shaker effect” — tiny white specks in every piece — which turned out to be copper contamination from a supplier’s new raw material source. Each story was precise, specific, and made the same point: environmental factors that nobody thinks to monitor are often the ones that destroy you.

2. He invoked information theory to demolish standard sampling plans. In the Spring 1978 issue, James drew on Shannon’s Fundamental Theorem of Information Theory to make a remarkable argument: that a sample of just 10 parts, properly analyzed using probability distribution methods, contains three times as much useful information as a standard AQL inspection sample of 125 parts. He laid it out in a simple table showing the number of “bits” in each approach. It was a beautiful example of bringing a concept from one field to completely reframe a debate in another.

3. He applied his methods to medical testing — and caught a flawed assumption. In the Summer issue, James analyzed radioimmunoassay data from a published medical paper — a type of hormonal blood test used in clinical chemistry. The original author had concluded that test results from two different sample volumes were interchangeable and could be pooled. James showed, using log-normal theory, that the difference in variability between the two volumes was not random but structural — a fundamental consequence of different sample sizes — and that pooling them would be statistically invalid. He reached across into biomedical science and corrected it with tools from engineering.

4. He admitted the jokes were gone — again — and promised them back. All four back pages of 1978 carry the identical note: “In Volume 5, we had so much material about the log-normal distribution that there was no room for any discussion of Extreme Value distributions nor space for any jokes. Therefore, Volume 6 will follow up on Extreme Value and other long tailed distributions and also include new information about plotting positions.” Two years running without jokes, and two years running of promising to bring them back. It’s endearing — he clearly felt the absence and wanted his readers to know he hadn’t forgotten them.

5. He introduced a beautifully practical concept about acceleration factors. In the Fall issue on life testing, James showed that acceleration factors — the multipliers used to speed up life testing by stressing components at higher temperatures or voltages — can vary dramatically from one production lot to the next. He showed two consecutive lots of the same tantalum capacitor where the acceleration factor differed by 3.4 times. His conclusion was characteristically direct: acceleration factors “can be like mirages — now you see ’em, now you don’t.” It was a warning against over-relying on a widely trusted shortcut, delivered with the same quiet confidence he brought to all his institutional critiques.
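The newsletter summary doesn’t say which acceleration model James used, but the standard temperature model is Arrhenius, and a tiny sketch shows why acceleration factors can behave like “mirages”: a modest shift in the effective activation energy between lots (the 0.7 eV and 0.9 eV values below are made-up illustrative numbers, not from the newsletter) multiplies the factor several times over.

```python
import math

# Conventional Arrhenius acceleration factor:
#   AF = exp((Ea / k) * (1/T_use - 1/T_stress))
# with Ea the activation energy in eV, k Boltzmann's constant in eV/K,
# and temperatures converted to kelvin.
K_BOLTZMANN_EV = 8.617e-5  # eV per kelvin

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor for stress vs. use temperature (Celsius inputs)."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t_use - 1 / t_stress))

# Two hypothetical lots of the "same" part, 55 C use vs. 125 C stress:
print(round(arrhenius_af(0.7, 55, 125)))  # lot A
print(round(arrhenius_af(0.9, 55, 125)))  # lot B
```

The two factors differ by more than 3x even though nothing about the test changed, only the material; that is exactly the lot-to-lot instability James was warning about.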

In 1979, James finally returned to extreme value distributions as promised — and quietly turned the lens on his own company’s sales records to test his methods on the most personal data he’d ever published.

Summary: 1979 was a year of methodological housekeeping and genuine expansion. James opened by overhauling his recommended plotting positions — admitting he had been teaching a slightly inferior formula for years and publishing an updated, more accurate table. He then devoted the year to Type I and Type II extreme value distributions, working through temperature stress tests on electronic switches, church fundraising contributions, liability insurance claims, elastomer pressure cycling, and — in a fascinatingly candid move — TEAM’s own monthly sales figures, which he analyzed by product line, catalogue number, and customer maturity to understand why his data was inhomogeneous. By year’s end he noted proudly that Volume 6 contained tables and examples that had not been published anywhere else.

5 Notable Things from 1979

1. He corrected himself publicly and without fanfare. The Winter 1979 issue opens with a direct revision of advice James had been giving since Volume 1: his recommended formula for calculating plotting positions. After reviewing new research by hydrologist C. Cunnane, James quietly replaced the old formula with a more accurate one and published a full set of updated tables. There was no defensiveness, no minimizing — just a clean acknowledgment that a better method existed and here it was. For someone who had been teaching the old formula for five years, this kind of intellectual honesty took real character.

2. He turned his methods on his own business. The Fall 1979 issue is unlike anything else in the newsletters. James applied extreme value analysis to TEAM’s own monthly sales data, breaking it down by product line, catalogue number, and customer type — distinguishing between “new,” “occasional,” and “regular” customers. His regular customers had a mean sale of $80 versus $29 for new ones, a striking difference. He wasn’t just illustrating a method; he was genuinely trying to understand his own business with the same rigor he applied to everyone else’s. It’s one of the most personally revealing things he ever published.

3. He caught bad switches hiding in plain sight. In the Spring issue, James was analyzing base current data from power switches before and after temperature stress testing. When the post-stress data showed four suspicious outliers, he traced them back to the original lot and discovered something quietly alarming: those four switches had been out-of-specification on other parameters all along — they were simply bad switches that had slipped through. He wrote: “Obviously, these data should be excluded from any subsequent analysis” — but the real lesson was that a probability plot had caught what an ordinary inspection had missed.

4. He applied financial statistics to a church fundraiser — and did a post-mortem when it missed. In the Summer issue, James analyzed the first day’s contributions to a church fund drive using Type II extreme value distributions, correctly predicted the likelihood of $500 and $1,000 donations, but missed the total yield by 31%. Rather than glossing over the error, he carefully explained why: contributions came in two forms (cash and pledges) with very different distribution patterns, and four geographic areas of the parish behaved differently. The post-mortem was as instructive as the original analysis — showing that getting the categories of classification wrong is often the real source of error, not the statistical method.

5. He noted — with quiet pride — that his work was original. Across all four back pages of 1979, James included the same editorial note: “Volume 6 has turned out to be exclusively about Type I and Type II Extreme Value Distributions and it includes tables and examples which have not previously been published anywhere.” It’s an understated but significant claim. For a man running a one-person newsletter out of a P.O. Box in rural New Hampshire, publishing genuinely novel statistical tables was no small thing — and he knew it.
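The plotting-position revision in point 1 above is easy to sketch. The summary doesn’t print the formula James adopted, but Cunnane’s published 1978 compromise plotting position is p_i = (i − 0.4)/(n + 0.2), which is what the sketch below assumes:

```python
# Cunnane (1978) compromise plotting position:
#   p_i = (i - 0.4) / (n + 0.2)
# for the i-th of n ordered observations -- approximately unbiased
# across a wide range of distributions, which is why it displaced
# older formulas like i/(n+1).

def cunnane_positions(n):
    """Plotting positions (as fractions) for a sample of size n."""
    return [(i - 0.4) / (n + 0.2) for i in range(1, n + 1)]

# Positions at which 5 ordered data points would be plotted
# on probability paper:
print([round(p, 3) for p in cunnane_positions(5)])
```

Each ordered observation gets plotted at its position on the probability scale; the positions are symmetric about 0.5, so the updated tables James published would simply be these values for each sample size.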

In 1980, James introduced an entirely new graphical tool — hazard plotting — specifically designed to handle the messy, incomplete, real-world failure data that conventional probability plotting couldn’t touch.

Summary: 1980 was the year James expanded his toolkit in a significant way, devoting all four issues to hazard plotting — a method developed by W. B. Nelson at General Electric for analyzing multiply censored data, where some items have failed and others are still running. He worked through diesel generator fan failures, missile program wire bond breaks, paint warranty claims, and more, applying exponential, normal, log-normal, and Weibull hazard functions across the year. The climax was a beautifully constructed four-issue story about a paint manufacturer’s five-year warranty program — James showed how pooling all paint colors into one analysis gave an estimate of claims ten times worse than reality, while breaking the data down by color gave an estimate only 1.6 times off. He noted with evident satisfaction that the warranty program — which had boosted sales by 12% — cost the manufacturer less than 0.5% of gross sales to honor.
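The core arithmetic of Nelson’s hazard plotting is simple enough to sketch. Each item (failed or still running) gets a reverse rank k counting how many items were still at risk; every failure contributes a hazard increment of 1/k, and the running sum is the cumulative hazard plotted against time on hazard paper. The data below are made-up illustrative times, not the diesel fan numbers from the newsletter:

```python
# Nelson-style cumulative hazard for multiply censored data:
# order all times, assign reverse ranks n, n-1, ..., 1, and let each
# *failure* add 1/k to the cumulative hazard (censored/suspended items
# use up a rank but add nothing).

def cumulative_hazard(times):
    """times: list of (time, is_failure); returns (time, H) at each failure."""
    ordered = sorted(times)
    n = len(ordered)
    H, points = 0.0, []
    for rank, (t, failed) in enumerate(ordered):
        k = n - rank  # reverse rank: items still at risk
        if failed:
            H += 1.0 / k
            points.append((t, H))
    return points

# Hypothetical hours-to-failure; False marks units still running
data = [(450, True), (460, False), (1150, True),
        (1320, False), (1600, False), (2070, True)]
for t, H in cumulative_hazard(data):
    print(t, round(H, 3))
```

The point of the method is that the censored units still do work: they hold down the reverse ranks, so the surviving items properly dilute the hazard attributed to each failure instead of being thrown away.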

5 Notable Things from 1980

1. He formally licensed another researcher’s work — and said so explicitly. In the Winter issue, introducing hazard plotting, James included an unusual footnote: “Portions of Dr. Nelson’s work are used directly under terms of a licensing agreement between the General Electric Company and TEAM.” This is the only time in twelve years of newsletters that James acknowledged a formal licensing arrangement. It speaks to his integrity — he could easily have paraphrased the method without mentioning it, but he wanted readers to know where the ideas came from and that he’d done things properly.

2. He solved a missile program quality problem with competing failure modes. The Spring issue contains a gripping industrial case: wire bonds on integrated circuits in a missile program were failing below the customer’s 250 mg breaking force requirement. James used hazard plotting to separate two competing failure modes — lifted bonds and wire breaks — showing they had nearly identical means but very different variability. His conclusion was precise: the bond problem was seven times worse than the wire problem, and even eliminating bond failures entirely wouldn’t bring wire failures within the 0.1% AQL requirement. Different problems required different corrective actions, and hazard plotting was the tool that separated them.

3. He demonstrated dramatically why pooling inhomogeneous data is dangerous. The paint warranty story is one of his finest teaching moments. When all paint colors were lumped together and extrapolated, the estimated claims rate was 23.5% — ten times the actual outcome of 2.36%. When broken out by color, the estimate dropped to 4%, still conservative but far more useful for planning. The lesson: merging data from genuinely different populations doesn’t give you a better answer, it gives you a worse one dressed up in false precision.

4. He acknowledged — with unusual candor — that even good extrapolation can be wrong. After correctly identifying that data should be analyzed by color, and after building careful confidence intervals, James’s extrapolated warranty estimates still came in about 60% too high for Red and Brown paint, while understating White. Rather than glossing over this, he wrote plainly: “Successful extrapolation occurs only when a process is known to be stable. Such an assumption is always risky.” He never oversold his methods, and this moment — admitting that good technique applied carefully to homogeneous data still produced a wrong answer — is a masterclass in statistical humility.

5. He began advertising a new book of his own — “Frugal Sampling Schemes.” Alongside his established book Probability Charts for Decision Making, the 1980 back pages begin advertising a second James King title: Frugal Sampling Schemes, described as extending his earlier work “into the small sample domain (n = 5 to 20) with authority and power.” It was priced at $24, nearly as much as his first book. After seven years of teaching through the newsletter, he was clearly building a small but serious body of published work — all of it designed around the same core mission of making rigorous statistics accessible to working engineers.

In 1981, James took hazard plotting — his newest tool — and showed it could do something no one had tried before: visually communicate the results of complex multi-factor experiments to the non-statisticians who actually had to act on them.

Summary: 1981 was a year of genuine methodological invention. James spent the first two issues applying hazard plotting to designed experiments, using ordnance muzzle velocity data and semiconductor diffusion data to show that hazard plots could communicate the meaning of ANOVA tables far more intuitively than the tables themselves. He then devoted the second half of the year to extreme value and Weibull hazard applications, tracking a consumer product through three successive design iterations in the field, and closing with a gyroscope failure mode analysis where the results were so unexpected — catastrophic failures and degradation failures behaving identically, when they were supposed to behave completely differently — that they revealed a fundamental manufacturing and materials problem nobody had anticipated. All four back pages carried the same unusually candid note: James admitted he didn’t yet know what Volume 9 would contain.

5 Notable Things from 1981

1. He invented a new use for hazard plotting — communicating experimental results to non-statisticians. The Winter and Spring issues contain one of James’s most creative methodological contributions. He showed that by plotting experimental data from a designed experiment on hazard paper — adjusting for each variable’s effect in sequence — you could produce charts that made the meaning of an ANOVA table visually obvious to engineers who would never read a table of F-ratios. He wrote: “The ultimate customer of the conclusions of statistically designed experiments is usually a non-statistician for whom charts and graphs are a common and familiar way of communicating important information.” He built the tool for them.

2. He uncovered a shocking manufacturing failure with a beautiful analysis. The Spring issue follows a zener diode diffusion process where yields ranged from 90% on some lots to near zero on 60% of all lots — a catastrophic situation. When James unraveled the experiment, the problem turned out to be humiliatingly simple: operator instructions said “after the boat becomes cherry red, continue to run the furnace for 10 more minutes.” But “cherry red” meant different things to different people, and timing from a wall clock introduced ±½ minute variation. Adding a calibrated thermocouple and a mechanical timer fixed it almost immediately, driving consistent yields above 95%.

3. He tracked a consumer product through three generations of redesign in real time. The Summer issue follows a new consumer product from its first six field failures through three successive design iterations — Design I, II, and III — using extreme value hazard plots at each stage to compare new results against the confidence interval established by the original data. By Design III, the improvement was unmistakable on the plot. It’s one of the clearest demonstrations in all twelve volumes of how these methods function not just as analytical tools but as an ongoing feedback loop between the field and the factory.

4. He quoted E. J. Gumbel in a way that was both technically precise and darkly funny. In the Summer issue, discussing why early failures are more important to study than late ones, James quoted the famous statistician E. J. Gumbel: “Methuselahs do not die of infant mortality.” It’s a perfect line — capturing in six words why the long-lived survivors of a test tell you almost nothing about the failures that happen quickly. James clearly savored it enough to put it in the newsletter.

5. For the first time, he admitted publicly that he didn’t know what came next. Every back page of 1981 contains this remarkable note: “At this point, the contents of Volume 9 are not at all well-defined. Available options are to develop more applications of probability plotting for use with the analysis of experimental data and/or the development of new methods of time series analysis using graphical methods.” Eight years in, James was still genuinely exploring — not filling pages with predetermined content, but following the work wherever it led. That admission of uncertainty, in a newsletter that had been running for nearly a decade, is a quiet portrait of intellectual honesty.

Alright, it’s me again. Man, do I relate to that “gee, I think I’m all out of things to say” and then going on for a whole other year.
