The BCN Podcast

How Statistical Process Control could transform Healthcare

BCN

In this episode of the BCN podcast, Oliver Chan is joined by Dr. John Bolton and Mark Graban to discuss how lean methodologies and control charts are transforming healthcare. 

They dive into Mark Graban’s journey into healthcare improvement, unpack the important challenges of patient safety and quality of care, and stress the critical role of measurement and leadership in driving meaningful change.

The episode helps us understand Statistical Process Control (SPC) charts in healthcare and highlights the importance of ongoing, meaningful analysis of these charts.

They also cover the struggles some leaders face in grasping statistical concepts and explain the need for a culture of continuous improvement in healthcare delivery.

Speaker 1:

Hello everyone and welcome to the BCN podcast. On this episode we're joined by Dr John Bolton and Mark Graban for a special discussion about the use of control charts and lean methodologies, particularly in healthcare. So, John, would you mind quickly introducing yourself?

Speaker 2:

Thanks, Oliver. So I'm John. I've obviously been working very closely with you over the last several months, thinking about the use of SPC, statistical process control, in healthcare and how you support it. I'm a physician by background. I trained as a rheumatologist, got very interested in quality improvement and then stepped out of that as I got the opportunity to go to the IHI as a fellow. I've worked in the Middle East and then came back to the UK, mainly working in Wales over the last eight years, and my last role before moving into consultancy was as National Director of Quality Improvement and Patient Safety. Really looking forward to the conversation with yourself today, Mark.

Speaker 1:

And, Mark, would you mind introducing yourself really quickly?

Speaker 3:

Well, thank you. Thanks to you both for having me here. My name is Mark Graban. I'm from the United States. My background originally is industrial engineering, and then about 10 years into my career I had an opportunity to start working with healthcare delivery organizations: medical laboratories, pathology departments, different departments across hospitals, including about 12 weeks that I spent in Northampton working with the NHS, Northampton General Hospital there, which was a lovely experience. So my focus in the last 20 years has very much been on applications of the lean management approach to healthcare, and to me that includes process behavior charts, and I've written a book on that subject called Measures of Success. And I've written some other books, including Lean Hospitals, Healthcare Kaizen and my most recent book, called The Mistakes That Make Us. So I work as an author, a speaker and a consultant. I'm really, really passionate about helping people improve in healthcare.

Speaker 2:

So, Mark, I'm going to open this with a question. You've got a really long background in industrial engineering. I think you worked for General Motors, is that right?

Speaker 3:

It's where I started my career.

Speaker 2:

Yeah, so what kind of triggered the interest or the move to actually look at healthcare?

Speaker 3:

There was a little bit of serendipity, thinking back to that, more than 20 years ago. So I was still working in manufacturing. I was working in Arizona for a company called Honeywell, and within the Phoenix area we had a loose network of people like myself that were doing lean improvement work at different companies in the area, all in different industries. Nobody was competing with each other. So about once a quarter we would get together and share what we were working on, our challenges, lessons learned. We would go visit each other's facilities. Well then, at one point, thankfully, and this would have been, I think, early 2005, two women had left Motorola, one of those manufacturing companies in the area, and were doing consulting work with a hospital in Scottsdale, Arizona. They walked us through the emergency department, or A&E, and it was just fascinating to think about the opportunities for improvement that they saw and the improvements that they made. And so that piqued my interest in the application of Lean in healthcare.

Speaker 3:

I tried reading what I could. There wasn't a whole lot being published at that point. There were some early experiments happening around the world. And then, summer of 2005, my wife took a new job in Texas, which put me on the job market. I got a call from a recruiter at Johnson and Johnson. They were looking to hire more people into a consulting group that they had that, again, worked with laboratories and hospitals. So you know I certainly had a keen interest in trying to be helpful. Healthcare is so important. My story isn't one where I was receiving a lot of healthcare. I know there are some physicians and others whose quality improvement origin stories start with really being a patient and seeing the systemic opportunities for improvement. Mine was a little bit different path, but I'm glad I had some good mentors and some good experiences that really, really deepened the passion that I have for healthcare improvement: for the patients, for the nurses, for the doctors, for everybody involved. It needs to benefit everybody, and that's, I think, a really important mission.

Speaker 2:

So that time, around 2005, it was kind of that decade of the 2000s where improvement just exploded onto healthcare. You know, I'm sure we're going to talk about To Err Is Human, the, you know, really seminal book, over the course of the next sort of 45 minutes. But you landed in healthcare when everybody was thinking about improvement, driving improvement. But even then, and, you know, still now, people say, well, healthcare is not like a factory. We're not Toyota, we don't do lean, it's not the same. Do you have any thoughts on that? You must have those conversations with people. Is there a way that you reflect back to, you know, physicians, clinicians, nurses, the managers, the administrators within organizations, when they say that?

Speaker 3:

Yes, it's been almost 20 years of having that conversation, and it's absolutely a point, a question, a concern that people in healthcare should bring up, because what they're saying is, of course, factually true: a hospital is not a factory, patients are not cars. So I think, hopefully, that question opens up an opportunity to discuss what I think is really the core: that methods with different labels, whether it's coming from Dr Deming, total quality management, the lean methodology, what have you, it's certainly not about turning the hospital into a factory. It's about helping the hospital be the best hospital it can be, or to even step back and more broadly say, to provide the best health to our communities. And that might not mean simply having safer, higher quality hospital care with fewer delays and better patient flow. It might mean catching symptoms or conditions early and helping people remain healthy enough to avoid traditional hospital care. And I think those discussions, of course, are a collaboration between outsiders like myself and the physicians and the administrators and everybody who's there within healthcare. If I come in pretending like I know it all and I have all the answers, I'm probably rightfully getting thrown out. I think I have some ideas, I have some good questions. If people are willing to learn, I can contribute some ideas, but it's always best in collaboration with people in healthcare.

Speaker 3:

And if we were to dig into, like, well, what do you mean by, you know, let's not turn this into a factory? And they might say, well, we can't make this a cold, uncaring, ruthlessly efficient at all costs sort of environment, I'll say, okay, well then, we won't do that. You know, if some of the pushback on lean is that, well, we need to focus on safety and quality and like not to get too academic, but I'm like, well, toyota focuses on safety and quality, like we actually have a lot of alignment, I think, in what we're trying to do. Now the questions might come down to how do we go about accomplishing that. But again, I mean, I think it's a matter of collaborating, helping people understand some of the sources of waste in the current system, that these are systemic problems that we need to develop, we need to improve our system or develop a better system, that we're not blaming any of the people, the individuals involved. I mean I think that builds connections with people in healthcare.

Speaker 3:

I try to get on the same page in terms of what are we trying to accomplish, what are our goals, and then, you know, we'll have discussion along the way of trying to identify the causes of the different problems. And the last thing I'll say is I think sometimes we have to demonstrate that improvement is possible, because I think sometimes people have become very discouraged that some of these problems around patient safety and quality have been around for a long time. You know, John, as you know, the To Err Is Human report is 25 years old now, but that problem is not solved. And sometimes people, I think, get discouraged, or they say, well, if these problems were solvable, we would have fixed them already, when it comes to, let's say, hospital-acquired infections or surgical errors or medication errors. And building some collaboration to start improving small things helps build confidence that we can actually effect change and make things better in small ways before we really take big leaps.

Speaker 2:

I was really struck, just doing some reading before we did this podcast, Mark. It's obviously 100 years since Shewhart created the first control chart, which we'll get on to, and 25 years from To Err Is Human, which probably set off and created the safety movement within healthcare. And yet there was a research study from Harvard that was published in May 2023 that said the state of patient safety in hospitals at the moment is probably the same as when To Err Is Human was actually written. We've been at this now for nearly 30 years. What's missing? What are we not getting right in healthcare that industry seems to have got right?

Speaker 3:

That's a great question. I mean one thought that comes to mind from you know, observation and working with organizations around the world, even though you know we have very different systems on one level between the US and the UK and I'll pick another country, I've been to a lot, the Netherlands how our health system is organized, who employs the physicians, who pays for care there's differences across countries, but when you get down to how care is organized and delivered, it's very, very, very similar. These are global problems. To err is human and other studies have focused a lot on US health care and I'm not here to defend all things US health care. But the per capita harm and death rates are very similar between the US and Canada. I've seen data from other countries. I mean it's not like any country is an order of magnitude safer or less safe. There's a lot to learn from each other. Like there's a lot of knowledge out there about how to eliminate central line-associated bloodstream infections, but that doesn't mean every hospital has that down to zero. There seems to be a lot of intent. That intent may flow down to being variation in the approach or the diligence that people are using, and by people I mean the organization.

Speaker 3:

I'm not trying to fault anyone, I try to use my words carefully, but I don't think we've seen uniformly distributed improvement. Even if I were to look here in the Cincinnati, Ohio area or the Dallas, Texas area, where I've spent a lot of time, we don't have, you know, uniform improvement. I mean, I have more questions than answers about, you know, once something's been proven possible over here, why is that not duplicated more quickly across other organizations? I think a lot of it comes down to, you know, differences in leadership styles. I think if some organizations are still stubbornly very top-down command-and-control environments where people fear speaking up or fear admitting that there's a problem, we're not going to have as much problem solving, right? So I think there are culture differences across different organizations that might explain, you know, a lot of the variation: why do we see great improvement in some organizations and less improvement in others? I'd be curious about your thoughts and expertise around that.

Speaker 2:

I'd say it's exactly the same here in the UK. You know, I've worked in the Middle East as well, and I'd say it's the same: it's differences in leadership. It's that challenge of, you know, spreading things, the cultural challenges of spreading things, resistance to change. But at the same time, and I think, you know, this is where maybe you and I have a real shared interest, the commentary from the paper from 2023 was, yes, it's leadership, absolutely leadership has to drive this, but it's measurement: how we understand what the systems are actually doing in terms of performing. And I think, from where I've been in my experience, despite all of this sort of advice and guidance that we should be measuring things regularly, we're still struggling, Mark. And, you know, I don't know whether that's your experience, or whether you've got examples of where measurement's really driven improvement really, really well.

Speaker 3:

That measurement is, you know, of course, critically important. But I think some organizations put a lot of focus into refining the accuracy of their measurements. And I think the real challenge and I think this is an area where process behavior charts can be really helpful is better understanding the cause and effect relationship between things we are doing in an attempt to improve the system and our understanding of whether those interventions or changes have really dramatically improved things. And healthcare is very complex because we might be making one intentional change to the system and there are many other changes happening. It's hard to really directly prove cause and effect. But I think what we can do better about is, you know, not getting too excited about. You know, we make an intervention and then that first data point, post-intervention, is a little better than the last data point and we might be tempted to say, well, look, the harm rate fell 20% compared to the previous period and we declare victory. And then the next period it goes back up 18%. And then people, maybe unfortunately, blame the people working in that system of well, why are you backsliding? Or there's different words that are used.

Speaker 3:

I'm like well, maybe the intervention didn't have a statistically meaningful effect at all and we need to go and try something different, and that's not meant to blame or punish the people who tried something where they had a solid hypothesis where, if we do these things, I think we can reduce infection rates by 90%.

Speaker 3:

Let's go try and test that hypothesis. We shouldn't punish people for trying, but we also, I think, need to understand when we need to think about adjusting in the name of PDSA cycles. If we tried something and it didn't drive the improvement that we predicted, we need to understand why. Do we need to adjust that approach? Do we need to try something else? Maybe that new approach wasn't really adopted, because sometimes communication and training is an issue. Leaders think they've communicated a new process, and maybe the training and support to the frontline staff hasn't been as solid as it could be. So I think there's a lot of places where things could get off track between good idea and maybe not having the impact we expected. But I think there's a lot of opportunity to be a little bit more scientific in how we evaluate our attempts to improve.

Speaker 2:

No, absolutely, completely agree. I guess I'm a big fan of, um, statistical process control, process behavior charts, however we describe them. I see them produced a lot. I don't see the insight that you've just described, necessarily, you know, really regularly used. And it almost feels like we create the chart and we've ticked the box, the chart's on the, uh, the report, rather than actually sitting down and saying: what does that chart now tell me? What's the variation? Is it common cause? Is it special cause? Is it meeting targets? How do we unpack this? Because it feels to me it's complex, but it's not complex. And who do we need to influence? Is it leaders? Is it middle managers? Is it the front line? How do we build those capabilities within organizations to help them really use these charts?

Speaker 3:

Yeah, I mean that's a really important distinction that you're making, john, creating the chart and using the chart. The math is simple, it's arithmetic, it's math anybody could do. Anybody can create a process behavior chart. You can do it by hand, you can do it in Excel, you can do it in specialized software. But the math is easy. Changing the mindsets can be a lot more difficult.

Speaker 3:

So if we create a technically correct SPC chart with properly calculated limits and a good number of baseline data points, and leaders are still demanding root cause analysis for a data point that's a little bit worse than average and worse than the previous time period, we're really no better off. Or, you know, people might be creating their own rules of thumb. Because you talk about targets, that's a different dimension, you know, to the calculated limits on an SPC chart. But I could see a scenario where people are kind of overlaying a goal, and there's common cause variation that occasionally brings the measure into what some organizations color code as green, we're now better than target, and then things become worse than target, or they become red, and they're overreacting either way. That common cause variation could be bringing us from green to red, to green, to red again. And there are rules like, well, if there are two reds, if you miss target twice in a row, you need to do a root cause analysis. And, you know, I'm not a statistician, but as somebody who has studied statistics and has been taught statistics, you know that's a waste of time, it's nonsense. There is no root cause if it's demonstrated to be common cause variation in an SPC chart. And then there are opportunities that people miss, where we're still better than target, but there might be a statistical signal that things have gotten worse in a way that we should react to. So there's this old habit of, I think, trying to explain every data point, a habit maybe worth giving up, and the old habit of only reacting to this binary better-than-target, worse-than-target each time period. And those habits are, I think, really tough to break.
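To make that distinction concrete, here is a minimal sketch (the numbers are made up for illustration, not from the episode) of checking a metric for two common Shewhart-style signals, instead of reacting to every red/green flip against a target:

```python
# Two Shewhart-style signal checks: a point outside the natural process
# limits, and a run of eight consecutive points on one side of the average.
def signals(data, average, lower, upper):
    """Return (indices outside the limits, start indices of 8-point runs)."""
    outside = [i for i, x in enumerate(data) if x < lower or x > upper]
    runs = [i for i in range(len(data) - 7)
            if all(x > average for x in data[i:i + 8])
            or all(x < average for x in data[i:i + 8])]
    return outside, runs

# Twelve months fluctuating around the average: several points miss an
# arbitrary target of 11, but there is no signal, so no root cause to find.
months = [12, 10, 13, 9, 11, 12, 10, 13, 9, 12, 10, 11]
print(signals(months, average=11, lower=6, upper=16))  # → ([], [])
```

A sustained shift, say eight months in a row above the average, would be reported as a run even if every point stayed inside the limits, which is exactly the "still better than target but getting worse" case described above.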

Speaker 3:

I think a lot of times leaders don't see a problem in the way they're behaving. Because, you know, if we're fluctuating around an average with common cause variation, you have this dynamic where we are now worse than target, and leaders give a motivational speech, or they get upset and they yell, or they could do all sorts of things, and then the next data point is better than target. And now the leaders convince themselves: I'm being a good leader, I did these things and we're now beating target. And then maybe they celebrate. And then common cause variation comes back in and performance drops, and now it's red, and the leader convinces himself: well, every time I praise people, performance gets worse, so I'm going to stop doing that. But it's not an accurate description of the cause and effect of what's happening, and a lot of times leaders have these habits. They think what they're doing is good leadership. But, back to, you know, being positive for a minute.

Speaker 3:

I've seen executives start to understand the mindset.

Speaker 3:

They might not understand the math.

Speaker 3:

They have people to do that for them. They can do the math, but it's more important that they have the mindset of learning when to look for a root cause, because there's a special cause in the chart, and to not overreact to every up and down. There's a leader, a CEO at a health system in Ohio. I first taught him and others five or six years ago. Well, the CEO has become their best advocate for the mindset, and that's been incredibly powerful. I had a chance to go back and visit about two months ago, and I'm looking forward to sharing more of their story. The CEO has allowed me to interview him and to interview some others, when I want to write up some case studies, because I think that's really the ideal situation: where the CEO has the mindset and can coach kind of on down. That doesn't mean everybody magically gets it, but it's better than the situation where a local leader understands it and is trying to build that mindset and culture, and then the executives swoop in and maybe punish somebody for common cause variation.

Speaker 2:

We wouldn't want that. So, just talking about that CEO, I think there is something we're missing in healthcare about how we support our leadership to really understand SPC, in that I think we've spent so long teaching the frontline SPC but maybe not the leadership. So what was the trigger for that CEO to really engage with SPC charts?

Speaker 3:

That's a great question. I should ask that in a follow-up interview, because this organization has a team they call Process Excellence. You know, it's their internal people that have a mix of clinical and engineering backgrounds, and they're applying healthcare quality methodologies. But when they brought me in to do a workshop, it was at an all-leadership meeting that they do four times a year, and the CEO, to his credit, was there and was participating. He was paying attention. I mean, so often executives sort of come in and give the royal blessing of, like, okay, thou shalt go learn, and then they leave and go to a meeting somewhere. The CEO was there, he was learning. I mean, I think there's no substitute for the experiential learning of the red bead experiment, which I love facilitating, and that exercise, as silly as it might seem, really builds, I think, new intuition for people. It really opens people's eyes. I mean, I think probably a decade ago, different organization, this was the chief medical officer from California, from a big academic medical center, and he sat in the front row. I wish he'd played the red bead game, but he was in the front row and he was watching and he was really paying attention. And we're asking people for their observations and reflections after the red bead game, and I showed them an SPC chart: look, the number of red beads is nothing but common cause variation, it fluctuates around an average. And he put his hand up and he said, you know, this has got me thinking. I think almost all of our patient safety metrics are red beads. Like, he had really learned something there through that exercise. And, you know, I think it's powerful to go back and, again, not just to create charts for the sake of creating charts, but to deepen our understanding of our system.

Speaker 3:

And coming back to Ohio, the CEO shared a story. I mean, his one quote, quite literally, he said: I think these charts are helping us save lives. And he was sharing an example of trying to reduce sepsis deaths. They had tried a lot of interventions and countermeasures and systemic changes, and the process behavior charts showed the truth, the uncomfortable truth, that the death rate from sepsis continued fluctuating around a stable average. And so it was through no lack of effort.

Speaker 3:

But they, I think in a non-blaming, non-judgmental way, realized: we need to adjust our approach, we need to go try something else. And they challenged themselves and their team to go and find other countermeasures, and I don't know what those countermeasures were, but they kept at it and they found some approaches that really dramatically reduced sepsis rates, I think it was at least 50%, I don't have the exact number in front of me. So now we'd say there is a signal in the control chart, but who cares about the control chart?

Speaker 3:

There's a statistically meaningful and sustained shift or reduction in those sepsis deaths, and, you know, a 50% reduction is a great start, no offense to the people who reduced it by 50%. And I think the charts, when we connect them to our understanding of what we're doing, what we think the cause is, what's the action, what's the result, we can better understand that. I think it really helps drive improvement. Whereas, again, if we just created pretty charts, there might be a risk that we weren't using them in a meaningful way or we weren't driving improvement that way.

Speaker 2:

No, absolutely. I think in the UK we're in a really interesting situation, in that, because we're one NHS, one national health system, we've actually now got a mandate that organizations will use process behavior charts, or statistical process control, however we describe it. So all of our reports at board level now have SPC charts. Whether they're used well or not, there's probably another podcast there to actually discuss. Is there something similar in the US? You know, you'd got Berwick, nearly 30 years ago, saying the importance of plotting data over time, and yet are we seeing that in the US? Or is there something that we need to think about in how we support them?

Speaker 3:

I mean, sadly, the general answer would be no. In my travels I'm always pleasantly surprised to see any form of control chart anywhere. So, back to this Ohio health system, then we step back and think more broadly: within that health system, the CEO has done what I hear you describing. He has mandated the use of control charts. He doesn't want to see data presented to him, brought to him in meetings. There's a mandate, but it comes with a lot of support. And I know there's a lot of support and great publications that I've been exposed to from the NHS about how to create the charts, and I know there's an attempt to teach people how to use them. I respect what people are doing there, but it's a big difference between a small organization and a national health system.

Speaker 2:

That kind of brings me on to some of the language we use, Mark, around these charts. You know, we talk about Shewhart charts, control charts, statistical process control. You describe them as process behavior charts, which is slightly different thinking. And, you know, again, no criticism to the IHI at all, because of the amount of influence that they've had, but we're encouraged to actually look at statistical process control charts in healthcare as maybe five to seven different types of charts, each with a different calculation, each with a different, maybe, way of thinking in terms of the underlying statistics. And yet you talk about process behavior charts, which is slightly different, in terms of it's maybe one chart. It's very much in line with the thinking of, you know, a very great statistician, Donald Wheeler. Could you speak to some of that difference?

Speaker 3:

Yeah, I sure can. And I was trying hard not to interrupt, but I'm glad you mentioned Dr Wheeler's name, because I almost interrupted to say he deserves credit for that term.

Speaker 2:

Totally, absolutely.

Speaker 3:

I've been indirectly a student of Dr Wheeler's. I first read his book Understanding Variation well over 25 years ago. I was very young in my career. I've gotten to take a class with Dr Wheeler. He wrote the foreword for my book, Measures of Success, and I greatly appreciate him doing that.

Speaker 3:

So, for one, around terminology: process behavior chart, you know, just to say it in reverse, it's a chart that helps us understand the behavior of a process. We could say a system behavior chart, and then we could do a whole podcast about what's the difference between a process and a system, a different thing to dig into. But I think with statistical process control there's a different goal and a different motivation. So when I started my career in manufacturing at General Motors, as you mentioned at the beginning, this was at an engine plant, cutting metal and forming parts that had a number of holes that were ideally a very precise size in diameter, plus or minus some permissible specification where the engine would still work well, but better if you could control that process to reduce variation, so that the difference from engine to engine would be almost impossible to measure, would be the ideal. So you're looking for stability and precision and a never-changing number until the engine gets redesigned. If we're looking at a process, like all of the processes in healthcare, we're either always driving towards zero or 100%. We don't want to control that process to have the exact same number of patients getting MRSA infections every month. We're trying to drive these downward towards zero. And then we could do a whole other discussion about should we set a goal of zero, is that motivating or is that unrealistic, a different topic. But we're not trying to control the process, and I don't like the word control, because we're not trying to control people. We're trying to enable them and unleash them in their improvement work so that we can continually drive the process to be better. So that's what I would say about terminology. Then we would get to methodology.

Speaker 3:

Don Wheeler, one thing I've learned from him, going back to Walter Shewhart, is that of all of the different types of control charts, the XmR chart, X being the data, the MR being the moving ranges, and, see, now it gets complicated: what do you mean, we have two charts for each set of data? But the XmR chart methodology was quite literally the latest and greatest invention of Walter Shewhart. As far as control charts go, it is state-of-the-art technology, even though it's quite old at this point, I think the XmR chart's maybe 80 years old. It covers so many different situations where the underlying statistical distributions of the data just literally don't matter. It's such a robust methodology that we can use, and we can teach just one methodology instead of having this really complicated flow chart of when we should use different control charts in different situations.
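As a rough illustration of how simple the arithmetic is, here is a minimal sketch of the XmR limit calculation using Wheeler's usual scaling constants (2.66 for the X chart and 3.268 for the moving range chart); the data values are invented for illustration, not taken from the episode:

```python
# Minimal XmR (individuals and moving range) chart limits: X limits at
# average ± 2.66 * mean moving range, and the MR chart's upper range
# limit at 3.268 * mean moving range.
def xmr_limits(data):
    """Natural process limits for an XmR chart."""
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    x_bar = sum(data) / len(data)
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return {
        "average": x_bar,
        "lower_limit": x_bar - 2.66 * mr_bar,
        "upper_limit": x_bar + 2.66 * mr_bar,
        "mr_upper_limit": 3.268 * mr_bar,
    }

# e.g. twelve months of an illustrative count metric
counts = [12, 9, 14, 11, 10, 13, 8, 12, 11, 10, 15, 9]
limits = xmr_limits(counts)
```

The same function works for any ordered series of counts, rates or times; in practice you would plot the X values against these limits and the moving ranges against the MR limit, then look for signals.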

Speaker 3:

A lot of people don't have patience for learning that, and some of these control charts, I think, get quite hard and difficult to interpret when the limits are constantly moving up and down based on the n for a certain time period. And using the XmR chart, thankfully, is a simplification that's not inappropriately oversimplifying things. It's not lazy, it's not, oh, because it's easier. There's, I think, a compelling argument that the XmR chart is actually better suited to the kind of messy real-world data that we have in organizations, including healthcare.

Speaker 1:

Can I just ask a quick question on there, going back to something you mentioned earlier with manufacturing too? So we've been working with healthcare organizations for quite a while with control charts, SPC charts, as you've both been discussing, and more recently we're having conversations with manufacturing organizations and other industries who, as you've both mentioned, are again using control charts, but in a slightly different way. And one of the key differences, from my take, and John and I have discussed this as well, is the use of capability analysis, Cpk, Ppk. A lot of organizations I come across almost say, we are not using a control chart unless there's Cpk in the chart, because Cpk is everything, or Ppk, etc. So I guess my question is: is there room? Can healthcare adopt capability analysis, or is that only something which, in your experience, could be used in manufacturing?

Speaker 3:

I think conceptually it's important in healthcare, because now we're talking about the two dimensions of where our target is, where we want the data to be, and whether the variation has gotten to be very low. This could be something like, say, a patient experience survey, as they're called in the United States. You might say, okay, well, ideally maybe there's a world where 100% of patients are satisfied, but maybe some of them come in seeking drugs that are inappropriate, and you know it's different than other industries. Again, we're not selling cars, we're not selling coffee. But let's say there's a goal of 90% patient experience scores and, month to month, if that's how it's being charted, let's say the lower control limit is 93, the upper control limit is 97 and the average is 95. I haven't done the math in a while, but conceptually the Cpk would be pretty high; we could feel very confident that that measure is always going to remain above target unless the system changes. And the SPC chart will tell us if the system has changed, because we'll see a statistical signal, whether it's gotten better or whether it's gotten worse. And then there's a question of, well, should we raise the target?
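For illustration, the hypothetical numbers Mark quotes can be run through the usual Cpk arithmetic. This is a rough sketch, assuming the 90% goal is treated as a lower specification limit and sigma is backed out from the 3-sigma control limits:

```python
# Rough illustration of the Cpk idea using the hypothetical numbers
# from the discussion: average 95, control limits 93 and 97, target 90.
# With only a lower specification, Cpk = (mean - LSL) / (3 * sigma).

mean = 95.0
ucl, lcl = 97.0, 93.0
lower_spec = 90.0  # the 90% patient-experience target

# On a control chart the limits sit 3 sigma from the mean,
# so sigma can be backed out from the limit width.
sigma = (ucl - mean) / 3  # roughly 0.667

cpk = (mean - lower_spec) / (3 * sigma)
print(round(cpk, 3))  # 2.5
```

A Cpk of 2.5 is very high by manufacturing standards (1.33 is a common threshold for "capable"), which matches Mark's intuition that this measure would stay above target unless the system itself changes.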

Speaker 3:

That gets to the sometimes uncomfortable Deming point of, well, these targets are arbitrary.

Speaker 3:

You know, if we had set the target to be 95, now suddenly that process is no longer as capable.

Speaker 3:

There is less of a scientific basis for how that target is set, whereas for the engineers who design a car engine there is a very scientific basis for why the size of that hole should be what it is and why the variation needs to be within a certain range, because if the hole's too big the engine blows up one way and if the hole's too small the engine blows up a different way. There's some science to that. Whereas a lot of targets set by management or by the government are, I think, by definition, arbitrary. That doesn't mean they're bad. But I wonder how much time people spend debating whether the goal should be 92.37. Why are we dealing with so many significant digits? It seems insignificant. So we have measurement, then we have targets, and we're like, well, if measurement automatically led to improvement, this would be easier. And if setting a target automatically led to improvement, things would be even easier, you know. But we know it doesn't work that way.

Speaker 3:

I think amongst us we'd agree.

Speaker 2:

Yeah, absolutely. Can I go back to the conversation about process behavior charts? That whole thing about capability, I think, is just a fascinating conversation. So I've been a fan of control charts and SPC for 20 years now, and it's taken me 20 years to really understand it, because I didn't take time to study it. And you know, I've spent the best part of the last several years trying to teach control charts to people who don't have a strong background in statistics. For most of them, the last time they did any form of maths, Mark, was, you know, when they were at high school, and they still struggle to understand the difference between a mean, a median and a mode. And yet we're asking them to actually start thinking about this.

Speaker 2:

And I think there's something in what you said about the process behavior chart, in terms of the XMR chart, versus trying to teach them five to seven different charts where, oh, by the way, if you do a p chart, can you assure yourself that the probability in each time period is the same for each defect, and have you got the right sample size?

Speaker 2:

And likewise the u chart, and, by the way, this one comes from a binomial distribution versus a Poisson distribution, whereas the XMR chart doesn't care about the distribution, doesn't care about what's going on beneath it. It's just a simple, straightforward chart with, you know, five to seven rules, depending on which set of rules you actually use. Should we just be simplifying it and actually just helping people get really good at using an XMR chart, rather than trying to overwhelm them with all of these decisions of, you know, is it count or continuous data, is it attribute data, and you need to be using this chart or that chart, but, by the way, your assumptions are wrong? Shouldn't we just simplify it and just give them a really good chart that's robust and actually helps them make good decisions?

Speaker 3:

Yeah, that's the short answer, but the longer answer is I think we also have to make sure they're learning the mindset, right? So if I were to go into an organization and they're using c charts and p charts and different charts, and they have the mindset there, that might be 90% of the battle. I don't want this to be an inappropriate analogy, but it's like we're fighting on the same side of the war. We're allies, but you have a slightly different weapon than I do. We could still fight together, we could still be on the same team instead of fighting each other. I think it's easier, though, and, again, I think better, to teach the one type of chart.

Speaker 3:

When I've taught these workshops, there's always a wide range of people in the room, from those who, like you said, John, haven't thought about statistics or arithmetic in a very long time, to people with statistics degrees who are Six Sigma Master Black Belts, and nobody's really ever picked a fight about it. But people are skeptical. They're like, well, this is different than what I was taught. It's good to be skeptical; let's talk through these things. And I remember, after one workshop, somebody who was a Six Sigma Master Black Belt came up to me and he said, basically, I'm so thankful that I can teach this one methodology, because when I try to teach my leaders, they stop listening by the time I'm saying, well, now the second type of chart is... They don't have the patience for that. So let's teach one type of chart. Whether they learn the math or not, let's teach the mindsets.

Speaker 3:

Because you're saying, john, the rules for detecting a signal are the same. We could have a long discussion about which rules and why, but the main rules that I think are unarguable are any data point outside the limits, how many data points? Seven, eight different numbers people throw around of how many consecutive data points on the same side of the average. And then, beyond that, there's other rules that are incrementally helpful. But even if we just, I think, unlearn the lesson of trying to explain every up and down, of trying to explain every red and instead, you know, kind of just reacting to signals, is one helpful thing to start doing.

Speaker 3:

And then the second thing is that when we have a performance measure that's just stubbornly fluctuating around an average, we need to improve the system, right, and stop the habit of reacting and asking why last month was a little worse than the month before. Stop asking that question and instead we can ask, back to Oliver's process capability question, why is this process generally not capable of the performance level that we need? That leads to a different type of problem solving, where we can do systemic problem solving using the IHI Model for Improvement, or we could use A3 problem solving, as Toyota might call it. We're using systemic problem solving and improvement approaches that are decidedly less reactive than I think the typical way of managing would be.

Speaker 2:

I think there's a real opportunity there for us to help leaders. They're busy, they're overwhelmed with information, and I think to reduce the amount of things that they need to look at and think about, so they've got one chart, maybe with some of the thinking in and around process capability. I think we've got the opportunity to shift the conversation, shift the dial, however we want to describe it, and really, you know, help leadership engage with this well.

Speaker 3:

And Donald Wheeler was leading the way. He's written large, dense statistical textbooks, but his book Understanding Variation, designed for business leaders, is a pretty small, pretty short book. I should have followed his lead a little bit better with my book. When he decided to write the foreword, maybe his feedback should have been, hey, this book's way too long. That might have been helpful feedback.

Speaker 2:

Mark, I've got considerably longer books on statistical process control than yours.

Speaker 3:

But hey, continuous improvement. I've gotten some good feedback from people that the book could have been half as long. I'm like, okay, well then read the first half of it and go get to work.

Speaker 2:

I think it's a great book. I enjoyed reading it. It's a great contribution to the use of measurement, so I wouldn't worry about the length of it.

Speaker 3:

Well, see, there's economics here. The book would have been cheaper, which means maybe more people would buy it and more people would use it. But podcasts are free, you know.

Speaker 1:

Yeah, absolutely. Excellent. Well, we're just coming up on time there; we've pushed it right to the limits. So thank you both for taking the time, this morning for Mark and this evening for John, to join us today for this podcast. I really enjoyed the conversation and, as Mark said, do go and listen to some of the podcasts he's got, and he's got a couple of books out there as well. Do have a look at them too.

Speaker 2:

Yeah, Mark, thanks so much for your time. That was really insightful. Really appreciate it.

Speaker 3:

Oh well, thank you. I really enjoyed being able to talk about these things, and I'm looking forward to talking more sometime.

Speaker 2:

Absolutely. Look forward to it.