Monthly Archives: May 2011

June is a busy month for us; here is a brief synopsis of what is coming up:

June 1 – The Ridgetop Group and Ops A La Carte combine forces to present a FREE webinar entitled How to Use HALT with Prognostics. Register here for this event.

June 6 is MD&M East. We will be giving a seminar titled Medical Device Reliability Testing. You can register at MD&M.

June 7-9 is the Applied Reliability Symposium (ARS). Ops A La Carte will be making two presentations – What Is DfR and What Is Not DfR and 50 Ways to Improve Product Reliability. Both are well worth attending. You can register at ARS.

June 16 is our FREE Open House from 11:00 am-3:00 pm Pacific time in Santa Clara, CA. IEEE CPMT is co-sponsoring the event with us. The event will highlight our lab expansion with new capabilities. We’ll have a look at our newest piece of equipment, the Chart HALT Chamber. As well, this will be a 10th Anniversary celebration for us. Register with Cathy Dols at Cathy.Dols@chartindustries.com

June 20-23 – IEEE Prognostics and Health Management (PHM) Conference. We will be giving a presentation – How to Use HALT in Conjunction with Prognostics. You can register at PHM.

Reliability Hotwire, a ReliaSoft monthly eMagazine for reliability professionals, recently published two papers entitled:

“Taguchi Robust Design for Product Improvement” (http://www.weibull.com/hotwire/issue122/index.htm)

and “Taguchi Robust Design for Product Improvement: Part II” (http://www.weibull.com/hotwire/issue123/index.htm).

I read the first paper, in issue 122, after it was published and found many issues and discrepancies. I had worked with Dr. Taguchi directly for approximately 10 years while I managed the Xerox Robust Engineering Center, and I had numerous other interactions through Taguchi’s affiliation with the American Supplier Institute in Detroit. I responded to ReliaSoft and the author of the paper regarding the first publication, but no modifications were made. I was then given a pre-read of the second paper, and several general modifications were made prior to publication. Below are my responses to the first publication and my pre-read response to the second publication.

Chris,

I just wanted to send you a few comments about the ‘Taguchi Robust Design for Product Improvement’ paper published in issue 122 of Reliability Hotwire. The stated objective of the experiment was to find the appropriate control factor levels in the design. Dr. Taguchi told me on numerous occasions that the primary objective of parameter design was to prevent a poorly understood or inadequate process design from going downstream (and creating lots of cost and trouble). Parameter design experiments should be used to inspect engineering knowledge and process downstream readiness. They verify that the engineers can prevent surprises downstream, and that they can make the process do what they want, when they want it.

If an engineering team runs an experimental design such as the one shown, and they know very little about the important process parameters, they will still collect data. They will still analyze the data and try to make inferences and decisions based on their results. They will not, however, be providing any protection for the downstream enterprise. If an engineering team selects several noise factors and levels which do very little to affect the function, they will still collect and analyze data and try to make inferences. They will not, however, be providing sufficient protection against the actual process noises which will appear in downstream conditions. If the engineering team selects a response and measurement system with serious limitations, they will still collect and analyze data and try to make inferences. The response and measurement system may have any number of serious problems: lack of engineering focus, ambiguity, lack of validity, nonlinearity, large errors, and so on. The engineering team will still collect data and do their thing. Unfortunately, the experimental results will be of little value, and later on, downstream people will tell you so.

 

Inspection of the data in the paper reveals that the numbers are all about the same from one noise factor level to another. In other words, the noise factors create no systematic contrast between levels. The noise factors and levels selected have very little effect, one indication of a poorly understood design. A good engineering team could easily come up with some potent noise factors they will have to worry about downstream. The signal-to-noise ratios show only a ~1 dB range among all eight experiments, which is quite small. The mean and standard deviation data show a bothersome trend where the higher means have the lower standard deviations. This is probably due to saturation of the data as the numbers approach 100 on the gloss meter scale. A logistic transform of the data is usually applied to 0-to-100 scales, as with percentages, to minimize the rail condition.
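As a rough illustration of that last point, here is a minimal sketch of the logit-style transform often used for bounded 0–100 data; the gloss readings below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical gloss readings (0-100 meter scale) bunching up near the rail.
gloss = np.array([92.0, 95.5, 97.0, 98.5, 99.2])

# Logit (omega) transform for bounded 0-100 data: it spreads values near the
# rail back onto an unbounded, decibel-like scale before analysis.
p = gloss / 100.0
omega_db = 10 * np.log10(p / (1 - p))

print(np.round(omega_db, 2))
# Near 100 the raw readings compress, but the transformed values keep
# separating, which helps restore a more uniform variance for analysis.
```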

 

Dr. Taguchi would probably have prescribed either an L9 (3^(4-2)) inner array or perhaps an L12 inner array with additional factors assigned. Assignment of control factor (CxC) interactions was usually discouraged. It was left to the engineering team to use their understanding of the process to assign factors appropriately. For example, most engineers would know that paint is a shear-thinning fluid, i.e., the viscosity drops as the flow rate increases. By not using this fact during the assignment, they would probably observe an interaction effect between those two factors. By adjusting the flow rate levels depending on the viscosity level, the interaction could be avoided. This was called sliding scale assignment (see the sketch below). The team could demonstrate their engineering knowledge by appropriate assignment.
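A minimal sketch of what such a sliding-level assignment might look like in practice; the viscosity and flow-rate numbers are purely illustrative, not from any real paint process:

```python
# Sliding-level (sliding scale) assignment: the flow-rate levels actually
# tested depend on which viscosity level is selected, so the known
# shear-thinning behavior is absorbed into the level definitions instead of
# appearing as a CxC interaction. All values below are hypothetical.
viscosity_levels = {"low": 50, "mid": 100, "high": 200}   # cP, illustrative
flow_rate_levels = {                                       # mL/min, illustrative
    "low":  (300, 400, 500),   # thinner paint: test higher flow rates
    "mid":  (200, 300, 400),
    "high": (100, 200, 300),   # thicker paint: test lower flow rates
}

def flow_setting(viscosity_level: str, flow_level_index: int) -> float:
    """Return the actual flow-rate setting for a given viscosity level and
    the coded flow-rate level (0, 1, or 2) from the orthogonal array column."""
    return flow_rate_levels[viscosity_level][flow_level_index]

print(flow_setting("high", 2))   # coded level 2 means 300 mL/min for thick paint
```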

Below is a simple graph showing an interaction between two control factors A and B. The effect of factor A makes Y increase, for example, when B is at level one. The effect of factor A makes Y decrease, for example, when B is at level two. This means that the factor A effect cannot be relied on to make the response always increase (or decrease). Sometimes Y increases and sometimes Y decreases, depending on what the other factor(s) are doing. Remember that there are lots of factors not assigned to the experiment as well. If the effect of an assigned factor like factor A differs depending on what one or many other factors may be doing, that makes an unreliable effect. We would prefer a factor effect which will always move Y in the same direction. In physics, for example, consider Newton’s second law, sometimes written as F = ma. Increasing the mass always increases the force. It is not as if sometimes a mass increase creates a larger force and other times the mass increase creates a smaller force. You can rely on the force always getting larger as the mass increases. Similarly, you can rely on the force always getting larger as the acceleration increases. An engineering team that finds lots of antisynergistic control factor interactions does not understand the design very well and should not move it to downstream conditions.
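Since the graph itself does not carry over here, the following short plotting sketch (with made-up numbers) reproduces the kind of crossing interaction being described:

```python
import matplotlib.pyplot as plt

# Illustrative reconstruction of the interaction described above:
# factor A raises Y when B is at level 1, but lowers Y when B is at level 2.
A_levels = [1, 2]
Y_when_B1 = [10, 14]   # A effect is positive at B = level 1
Y_when_B2 = [12, 8]    # A effect reverses at B = level 2

plt.plot(A_levels, Y_when_B1, "o-", label="B at level 1")
plt.plot(A_levels, Y_when_B2, "s--", label="B at level 2")
plt.xticks(A_levels, ["A1", "A2"])
plt.xlabel("Control factor A level")
plt.ylabel("Response Y")
plt.title("Antisynergistic A x B interaction (illustrative)")
plt.legend()
plt.show()
```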

 

A parameter design verification test was always conducted by Dr. Taguchi to see if the results were reproducible. In the example shown, there was no verification test; only a final linear model was built from the regression analysis. The final control factor combination, chosen to maximize the S/N ratio, was not checked for reproducibility. Verification tests enable consolidation of robustness gains so that the next experiment starts from a better place. They also provide new data showing that the engineering team can create reproducible results using their knowledge of the process. The verification test should be used to demonstrate that gloss can be improved, not just to provide an equation.
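To make the idea concrete, here is a minimal sketch of the additive-model prediction a confirmation run would be checked against; the overall mean and per-level S/N means are invented numbers, not taken from the paper:

```python
# Hypothetical per-level S/N means (dB) at the chosen best levels of three
# control factors, plus the overall experiment mean S/N (all illustrative).
overall_mean_sn = 20.0
best_level_means = {"A": 21.2, "B": 20.8, "C": 20.5}

# Additive-model prediction of the S/N ratio at the optimum combination.
predicted_sn = overall_mean_sn + sum(
    m - overall_mean_sn for m in best_level_means.values()
)
print(round(predicted_sn, 2))   # 22.5 dB predicted

# A confirmation experiment at that combination should reproduce a gain close
# to (predicted_sn - overall_mean_sn); a large shortfall signals that the
# additive model, and the team's understanding, does not hold.
```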

 

Larger-the-better and smaller-the-better S/N ratios are usually used together to develop a more positive operating window. Smaller-the-better and larger-the-better S/N ratios were used early in Taguchi’s career, when measurements were frequently made on dysfunctional outputs (ideally zero). Spray painting defects like orange peel, sags, pinholes, and blisters would have been treated with smaller-the-better S/N ratios. Now the robustness effort is to work on the functions rather than the dysfunctions if possible.
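For reference, a small sketch of the two classical S/N ratios mentioned above; the defect counts and strengths used to exercise the functions are made up:

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better S/N ratio (dB); the ideal value of y is zero."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y**2))

def sn_larger_the_better(y):
    """Taguchi larger-the-better S/N ratio (dB); larger y is always better."""
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y**2))

# Hypothetical data: defect counts (smaller-the-better) and
# adhesion strengths (larger-the-better).
print(round(sn_smaller_the_better([2, 3, 1, 4]), 2))
print(round(sn_larger_the_better([45, 50, 48, 52]), 2))
```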

Spray painting ideal function development, with signal factors related to changing droplet kinetic energy (mass and velocity), would probably be the preferred approach today, given time and resources. I would first try experimentally to consistently and repeatedly create the same drop volume and velocity (by changing lots of control and noise factors). A very narrow distribution of drop volumes and a very narrow velocity distribution would be preferred. Tuning factors would be identified for changing drop volume and drop velocity. Subsequent targeting, wetting, and devolatilization experiments would be developed, followed by optimization of the curing process steps. Notice the decomposition of the gloss problem into upstream time segments: generate the drop, propel the drop, deliver the drop to the surface, remove the solvent from the drop, cure the drops together… Each step would be aimed at minimizing variation of the gloss, without measuring it.

 

Reply to Part II: Chris, interactions (CxN) between control factors (C) and noise factors (N) can be used to help with robustness improvement. If one were to use random numbers as data, however, CxN interactions could easily be observed. Discovering interactions means very little if they cannot be used to improve the design. Verification tests need to be conducted to confirm the gain predicted by taking advantage of CxN interactions. If that is not done, it is just a mathematical exercise.
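A quick simulation sketch of that caution, using nothing but random numbers (no real design behind them), shows apparent CxN contrasts appearing anyway:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure-noise data: responses for 8 control-factor runs x 2 noise conditions,
# all drawn from the same distribution, so no real CxN interaction exists.
y = rng.normal(loc=50.0, scale=2.0, size=(8, 2))

# Apparent "interaction" per run: how much the N1 -> N2 shift in that run
# differs from the average shift across all runs. It is rarely zero.
shift = y[:, 1] - y[:, 0]
apparent_interaction = shift - shift.mean()
print(np.round(apparent_interaction, 2))
# Nonzero values appear even though the data are random, which is why a
# verification run is needed before claiming a robustness gain from CxN effects.
```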

 

What is the relationship between the mean and standard deviation for a design? Is it correct to assume that they are independent? Is your picture correct? Most times, when the mean output is zero, the standard deviation is quite small. As the mean output increases, the standard deviation also increases. Treating the mean and standard deviation separately sounds enticing, but it ignores this reality. One side effect of treating them independently is to make the device or design work very inefficiently: it drives the design to small variation by driving the output to small levels. The objective should be to maximize the ratio of the useful output to the harmful output, not to drive both to zero. Occasionally the mean will increase while the standard deviation decreases. This is referred to as running into the rail. It may be a measurement system limitation, as I mentioned earlier.
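As a simple numerical illustration (assuming, purely for the example, a constant coefficient of variation), the standard deviation tracks the mean rather than behaving independently:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative only: a multiplicative (constant-CV) process, common when
# increasing input power scales both the useful output and its variation.
means = np.array([1.0, 5.0, 20.0, 80.0])
cv = 0.05   # assumed 5% coefficient of variation
samples = [rng.normal(m, cv * m, size=200) for m in means]

for m, s in zip(means, samples):
    print(f"mean target {m:5.1f}  observed std {s.std(ddof=1):.2f}")
# The standard deviation grows with the mean, so minimizing the standard
# deviation alone simply pushes the output toward zero.
```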

One way to increase the output response is to increase the power/energy into the design or process. As the power increases, however, lots of detrimental effects can be observed: temperatures change, chemical reaction rates change, optical side effects occur, mechanical problems like increased vibration amplitude appear, and the output becomes more variable. Finding a way to use the power to create more useful output while starving the side effects of energy is the job of parameter design. For the current process, the flow rate of the paint and the pressure drop in the gun are control factors that affect the energy of the droplets long before they strike the target. It may well be that most of the paint is going somewhere other than the target, creating a much thinner coating but slightly higher gloss. An inefficient, costly process is one that throws paint everywhere but the target, yet maybe meets the gloss spec. A better process would be one that delivered paint with the correct drop volume and velocity and placed droplets where they were supposed to go with the correct incremental thickness. Gloss is more a function of the substrate surface wetting and roughness characteristics and the rheological/devolatilization behavior of the paint after deposition.

Your paper is mostly devoid of any engineering consideration. The focus is on what to do with the data (whether or not it has any meaning). In all the years I worked with Dr. Taguchi, the focus was always on the engineering, the design decomposition, the measurement improvements, the verification testing, and the gains made by running a well-planned engineering experiment. I understand why you have included the response surface approach and reintroduced some of the approaches suggested by other statisticians many years ago.

I need to know: is this real data, or was it made up?

Your conclusions suggest a single array for both noise and control factors. This has been discussed elsewhere in great detail; it is not parameter design. The layout of an experiment is usually set by the constraints of time and money. Those are starting points for the design. There are many ways to adjust the size of an experiment: compounding noise factors, assigning only the important noise factor(s), using only a signal factor, loosening tolerances on setting factor levels, augmenting an earlier design, placing difficult-to-change factors in slowly changing columns, orthogonal array selection, …

As it turned out, the author indicated that the data were indeed fabricated. I was offered the opportunity to provide a future paper to help engineers and eMagazine readers understand Taguchi’s robust design methodology more accurately.

 

Louis LaVallee

Sr. Reliability Consultant

Ops A La Carte

One objective of working in reliability is to minimize Life Cycle Costs (LCC). In order to do this, a reliability engineer must select which reliability tools need to be used and then apply them properly across the product life cycle.

He or she also needs to stay on top of the information that is generated, to be certain it is used properly during the testing phase.

To learn more about Reliability Integration and the tools used, visit our site to see our reliability training courses, in particular the overall reliability course we offer.

Chart Product Manager Dan Strom and Ops A La Carte Managing Partner Mike Silverman are happy to announce a joint alliance between their two companies. The event will include the installation of a Chart REAL-36 HALT chamber and will be held at HALT and HASS Labs, the Ops A La Carte testing facility in Santa Clara, CA.

Join us on June 16, 2011 from 11 a.m. to 3 p.m., for a BBQ lunch, followed by technical presentations showcasing our expertise and demonstrating how we combine our world-renowned expertise to offer customers a complete solution.

Email Mike Silverman for more information.

Continuous Improvement (CI) is a driving force in business. However, how do we go about obtaining it from our suppliers?

What do we, as customers, do to define our expectations of improved quality, delivery service, and cost reduction to our suppliers?

What is done to aid the supplier in the pursuit of CI? What steps can we take to ensure we are on track to improving our performance via the compliance of our suppliers?

Ops A La Carte is pleased to offer free training on many of the topics that are most important to any reliability effort. We hold a free webinar on the first Wednesday of every month. We cover many topics that you should find helpful and that provide an excellent introduction to our company, team, and services.

To learn more about all of our courses, and to register, please visit the Ops A La Carte Reliability Course List and see what suits your needs best.

We combine our In-House courses to most closely meet our clients’ needs and goals, covering everything from theory and principles to application, with hands-on workshops and lectures.

Also visit our main site for more information on all of our reliability training materials.

Ops A La Carte Turns 10 Years Old

On May 1st, 2011, Ops A La Carte officially turns 10 years old.

Over the past 10 years, Ops A La Carte has grown from an initial idea to a global consulting firm serving customers in well over 100 industries and 30 different countries.

We would like to thank everyone we have worked with over the years for your patronage. Without our clients, we could not have made it this far.