Date: 2024-05-22
Wende Tian, Guixin Zhang, Xiang Zhang, Yuxi Dong
College of Chemical Engineering, Qingdao University of Science & Technology, Qingdao 266042, China
Keywords: Alarm threshold; Chemical process; PCA; Johnson transformation; Variable weight
ABSTRACT: To alleviate the heavy load that massive alarms place on operators, alarm thresholds in chemical processes were optimized with principal component analysis (PCA) weights and Johnson transformation in this paper. First, a few variables with high PCA weight factors are chosen as key variables. Given an initial total alarm frequency for these variables, the allowed alarm number for each variable is determined according to its sampling time and weight factor; its alarm threshold and then its control limit percentage are determined successively. The control limit percentage of non-key variables is instead determined with the 3σ method. Second, the raw data of all variables are transformed into normally distributed data with the Johnson function before their alarm thresholds are updated via inverse transformation of the obtained control limit percentage. Alarm thresholds are optimized by iterating this process until the calculated alarm frequency reaches the standard level (normally one alarm per minute). Finally, the variables and their alarm thresholds are visualized in parallel coordinates to depict their variation trends concisely and clearly. Case studies on a simulated industrial atmospheric-vacuum crude distillation demonstrate that the proposed alarm threshold optimization strategy can effectively reduce the false alarm rate in chemical processes.
Alarm management is of great significance in guaranteeing the safety and reliability of chemical processes. According to the statistics of the Engineering Equipment and Materials Users' Association (EEMUA), the number of alarms an operator can effectively handle is about 150 per day (one alarm every 10 min), with a maximum of about 300 per day (one alarm every 5 min) [1]. In practice, however, the number of alarms arising in a large-scale plant often exceeds this upper limit, placing a heavy attention load on operators. Most of these alarms are false alarms caused by unreasonable alarm threshold settings [2,3]. They not only distract operators' attention from key operations, but also conceal some key alarm signals related to plant safety. Therefore, a proper alarm threshold for each measurable variable is important for operators to correctly deal with abnormal events and continuously maintain safe process operations.
Current threshold setting methods for process variables fall into three general categories: model based, knowledge based, and process history based methods. Although the process history based method is not well suited to real-time alarm management because it requires high-capacity historical data, it is easy and efficient to implement compared with the model based [4–10] and knowledge based [11–17] methods. With massive data available from DCS, this method has been widely used in the process control field. For example, based on the Bernoulli data distribution, Chen [18] developed two-stage control limits of the Q and T² statistics in multivariate statistical processes to reduce the false alarm rate. Zhao et al. [19] proposed a data-filter-based automatic alarm suppression strategy to deal with repeated alarms. Tian et al. [20] combined a dead band with a data filter to effectively prevent the occurrence of repeated alarms. Xiao et al. [21] grouped and ranked alarms with multivariable correlation analysis to reduce the alarm number. In addition to these multivariate statistical methods, optimization methods with various objective functions are also very popular. For example, Han et al. [22] proposed a multi-variable alarm threshold optimization method by combining the false alarm rate and missed alarm rate with the correlation coefficient. Liu et al. [23] used the kernel density estimation method to assess the state of process alarms based on historical data, and then employed a numerical optimization method to solve the assessment problem. Abdou et al. [24] optimized alarm thresholds in an OS-CFAR system using a simulated annealing algorithm. Zang et al. [25] proposed a multidimensional kernel density estimation approach to optimize alarm thresholds with FAR and MAR as the objective function.
Although the above methods can optimize thresholds to a certain extent, their analysis processes are relatively complex and do not consider the importance of variables. Meanwhile, raw variable data are presupposed to be normally distributed, inevitably leading to some calculation errors. Motivated by the unequal contributions of variables to the alarm system, this paper proposes a novel alarm threshold optimization method for chemical processes based on principal component analysis (PCA) weights and Johnson transformation. The main contribution of this paper is to integrate PCA, Johnson transformation, and parallel coordinates closely, although these techniques are all well developed individually, so that operators can focus on the few important alarms given by key variables when facing large amounts of alarm signals. In this method, PCA is used to analyze the importance of variables and determine variable weights, the Johnson transformation is used to convert raw data into normally distributed data so as to optimize the thresholds of the few variables with higher weights, and parallel coordinates are used to visualize these variables and their thresholds in a single chart.
In the following sections, the proposed threshold optimization scheme is first described in detail. Its implementation is then illustrated with an industrial atmospheric-vacuum crude distillation example. The major conclusions drawn from the analysis of the application results are given at the end.
Fig. 1 depicts the alarm threshold optimization process. The eigenvalues, variance contributions, and principal components of the variable data are extracted by the PCA method to calculate variable weights. Several variables with high normalized weights are selected as key variables. An initial alarm frequency exceeding the international standard is given, and different alarm numbers are assigned to these key variables according to their weights. This alarm assignment for the key variables and the 3σ rule for the other variables give the initial variable thresholds and control limits. The variable data are then transformed with the Johnson function into approximately normally distributed data, and the variable thresholds are updated with the obtained control limits. This process iterates until the resulting false alarm rate and statistical alarm number satisfy the international standard.
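The iterative update loop above can be sketched as follows; the data, target alarm count, and step size are illustrative, not from the paper:

```python
from statistics import mean, stdev
import random

def alarm_count(data, threshold):
    """Count samples that exceed an upper-limit alarm threshold."""
    return sum(1 for x in data if x > threshold)

random.seed(0)
data = [random.gauss(50.0, 2.0) for _ in range(100)]  # 100 samples at 1-min intervals

mu, sigma = mean(data), stdev(data)
delta = 1.0                 # initial control-limit multiplier
target = 5                  # allowed alarms over the 100-min window (illustrative)
while alarm_count(data, mu + delta * sigma) > target:
    delta += 0.05           # widen the limit until the alarm frequency is acceptable
threshold = mu + delta * sigma
```

In the actual scheme the loop terminates on the international standard of one alarm per minute; the toy target above merely exercises the same stopping logic.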
PCA uses eigen-analysis of the covariance matrix to reduce the data dimension while preserving the data's contribution to variance as much as possible. When there are large numbers of correlated variables in a data set, the independent principal components obtained with PCA can be used to represent the variation structure of the original data set.
The weight of a variable refers to its relative importance in the overall alarm evaluation structure. In this paper, PCA is used to determine variable weights with the following steps:
(1) The ratio of each variable's loading in the component matrix to the square root of the corresponding principal component eigenvalue is calculated to obtain the coefficients of the linear combinations.
(2) The variance contributions of the principal components are used to compute the weighted average of the variable coefficients in the linear combination models, giving the comprehensive score model of the principal components.
(3) Weight factors are finally obtained by normalizing the variable coefficients of the comprehensive model.
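The three steps can be sketched with NumPy on synthetic data (the paper uses SPSS on plant data; the variable count and random seed here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=100)    # make two variables correlated

R = np.corrcoef(X, rowvar=False)                  # correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]                 # eigh returns ascending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = max(1, int(np.sum(eigvals > 1)))              # retained principal components
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])  # component (loading) matrix

# step (1): linear-combination coefficients = loading / sqrt(eigenvalue)
coeff = loadings / np.sqrt(eigvals[:k])
# step (2): variance-contribution weighted average -> comprehensive coefficients
contrib = eigvals[:k] / eigvals.sum()
comp = (coeff * contrib).sum(axis=1) / contrib.sum()
# step (3): normalize absolute comprehensive coefficients into weights
weights = np.abs(comp) / np.abs(comp).sum()
```

The absolute value in step (3) anticipates the sign issue discussed later for negatively correlated variables.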
Although many variables in chemical processes are assumed to be normally distributed, the approximation is not always satisfactory. It is therefore necessary to perform a normal distribution transformation before investigating the data variables in depth. As a simple and efficient method, the Johnson transformation was used in this work to transform the variable data toward normality.
For a set of specific non-normal or weakly normal variable data, the Johnson transformation steps are as follows [26]:
(1) Choose a suitable z. A suitable z is crucial for the Johnson transformation to fit non-normal data properly and obtain an optimal transformation. The ideal value range of z is s = {z: z = 0.25, 0.26, …, 1.25}, with the normality of the transformed variable as the criterion.
(2) Select the corresponding distribution system for the Johnson transformation. The available Johnson distribution systems are listed in Table 1. The distribution probabilities {p_{-sz}, p_{-z}, p_z, p_{sz}} for {−sz, −z, z, sz} are given in the standard normal table, and the corresponding quantiles {x_{-sz}, x_{-z}, x_z, x_{sz}} are determined from the sampling data. Then, the quantile ratio QR = mn/p² is calculated with m = x_{sz} − x_z, n = x_{-z} − x_{-sz}, p = x_z − x_{-z}. Finally, the three
Table 1 Johnson distribution system
Johnson distribution types listed in Table 1 are distinguished using this ratio. The discriminant rules for S_B, S_L, and S_U are as follows:
If x follows the S_B distribution, then mn/p² < 1;
If x follows the S_L distribution, then mn/p² = 1;
If x follows the S_U distribution, then mn/p² > 1.
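A sketch of this quantile-ratio discriminant, assuming s = 3 as in the classical Slifker–Shapiro selection procedure; the function name and the crude empirical quantile are illustrative:

```python
from statistics import NormalDist
import math

def johnson_family(data, z=0.53, s=3.0, tol=0.05):
    """Classify data as 'SB', 'SL', or 'SU' via the quantile ratio mn/p^2."""
    nd = NormalDist()
    xs = sorted(data)
    n = len(xs)
    def q(prob):                         # crude empirical quantile, adequate for a sketch
        return xs[min(int(prob * (n - 1)), n - 1)]
    x_msz, x_mz, x_z, x_sz = (q(nd.cdf(v)) for v in (-s * z, -z, z, s * z))
    m = x_sz - x_z
    nn = x_mz - x_msz
    p = x_z - x_mz
    qr = m * nn / p ** 2
    if qr < 1 - tol:
        return 'SB', qr
    if qr > 1 + tol:
        return 'SU', qr
    return 'SL', qr

uniform = [i / 999 for i in range(1000)]                                      # bounded data
cauchy = [math.tan(math.pi * ((i + 0.5) / 1000 - 0.5)) for i in range(1000)]  # heavy tails
```

Bounded data such as the uniform sample yield QR < 1 (S_B), while heavy-tailed data such as the Cauchy sample yield QR > 1 (S_U).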
(3) Obtain the transformation equations with appropriate parameters. The corresponding parameters are calculated as follows [26]:
For the S_B curve:
If the data follow a distribution other than the normal distribution, the Gaussian function cannot be used to perform this fitting. An alternative is to collect enough data from a stable system, omit the obviously bad data, and then fit the sampling data with other functions such as the uniform or quartic function.
Parallel coordinates are an important information visualization technique [27,28]. A series of parallel axes is used to represent high-dimensional data, preventing the data from running out of space and overcoming the difficulty of expressing data with more than three dimensions in a Cartesian coordinate system. The technique is well suited to visual data analysis owing to its projective geometrical interpretation and mathematical duality foundation. In this paper, parallel coordinates were therefore used to depict the variables and their alarm thresholds after PCA and Johnson transformation.
Parallel coordinates map an n-dimensional data attribute space to a two-dimensional plane using n equidistant parallel axes. Each axis represents an attribute dimension, with values distributed evenly from the minimum to the maximum of the corresponding attribute. Each data record can thus be represented by a broken line over the n parallel axes according to its attributes, so similar records yield similar line trends. Figs. 2 and 3 show a straight line in the Cartesian coordinate system and in the corresponding parallel coordinates, respectively. Fig. 4 shows a line chart example with six-dimensional parallel coordinates.
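The mapping can be sketched as follows: each record becomes a polyline with one vertex per axis, and each attribute is rescaled between its column minimum and maximum (function and variable names are assumptions):

```python
def to_polyline(record, mins, maxs):
    """Return (axis_index, height) vertices for one record in parallel coordinates."""
    return [(i, (v - lo) / (hi - lo) if hi > lo else 0.5)
            for i, (v, lo, hi) in enumerate(zip(record, mins, maxs))]

# three 3-dimensional records (illustrative)
data = [[1.0, 50.0, 0.2], [3.0, 70.0, 0.8], [2.0, 60.0, 0.5]]
mins = [min(col) for col in zip(*data)]
maxs = [max(col) for col in zip(*data)]
polylines = [to_polyline(r, mins, maxs) for r in data]
```

Drawing each polyline on n vertical axes reproduces the charts of Figs. 3 and 4; records with similar attribute profiles produce visually similar polylines.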
Fig. 2. A straight line in the Cartesian coordinate system.
Fig. 3. A straight line in parallel coordinates.
Fig. 4. A line chart of six-dimensional parallel coordinates.
Fig. 5. Flow chart of atmospheric-vacuum crude distillation.
The case studied in this paper is an industrial atmospheric-vacuum crude distillation example. In this example, 11 variables with a total of 164 observations are retrieved at a sampling interval of 1 min, including 100 observations of normal data and 64 observations of abnormal data (the feed flow at the atmospheric column bottom increases by 10%). Fig. 5 depicts the flow chart of this industrial atmospheric-vacuum crude distillation, and Table 2 lists the 11 measured variables.
Table 2 Part of measured variables in atmospheric-vacuum crude distillation
In this distillation process, the crude oil from the tank area is divided into three streams before entering the atmospheric tower (T-100). Some products leave the atmospheric distillation from the top or side draws, while the rest flows from the bottom into the heating furnace (FH-102) and enters the vacuum distillation (T-101). In T-101, the main products and by-products are discharged from the top, side, and bottom of the tower, respectively.
First, it is necessary to test whether the sampling data meet the PCA requirements. The Kaiser–Meyer–Olkin (KMO) measure is a common indicator that compares simple correlation coefficients with partial correlation coefficients among variables. Bartlett's sphericity test gives the statistical magnitude for testing whether all variables are mutually independent. These two methods are usually used to test PCA applicability in practice. Table 3 gives the KMO test standards for PCA and Table 4 lists the KMO and Bartlett test results.
It can be seen from Tables 3 and 4 that the suitability for PCA in this case is 'general' and the significance probability of Bartlett's sphericity test is 0.000 (smaller than 0.05). PCA is therefore appropriate for calculating variable weights in this case.
Table 3 Inspection standards of KMO
Table 4 Test results of KMO and Bartlett
Second, the PCA calculation on the 11 variables with 100 observations of normal data is carried out in SPSS software. The total variance explained by PCA in this case is listed in Table 5.
Table 5 Total variance explained in PCA
Principal components are selected according to the cumulative variance contribution rate in PCA. At present, there is no strict criterion for the dimensionality reduction of principal component analysis. In practice, one common approach is to choose components with eigenvalues greater than 1 and a cumulative contribution rate of more than 80% as principal components. The principal variances of the variables are used to calculate the total variance explained, and the principal components are then sorted according to their eigenvalues. As shown in Table 5, the eigenvalues of the first three components are all greater than 1, so they are selected as principal components in the following analysis. As the cumulative variance contribution rate of these principal components reaches 86.656% (>80%), they can represent the main information of all the variables.
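The selection rule can be sketched as follows. Only the first three eigenvalues are derived from Table 5 (via the 62.242%, 15.16%, and 9.254% contribution rates and the 11-variable total); the remainder are illustrative values chosen to sum to 11:

```python
eigenvalues = [6.847, 1.668, 1.018,                            # derived from Table 5
               0.5, 0.35, 0.25, 0.15, 0.1, 0.07, 0.03, 0.017]  # illustrative remainder
total = sum(eigenvalues)                          # equals 11 for correlation-based PCA
k = sum(1 for ev in eigenvalues if ev > 1)        # eigenvalue-greater-than-1 rule
cum_contrib = 100 * sum(eigenvalues[:k]) / total  # cumulative contribution rate, %
```

With these values the rule retains three components whose cumulative contribution is about 86.66%, matching the 86.656% reported from Table 5 up to rounding.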
The component matrix of PCA in this case is listed in Table 6.
Table 6 Component matrix of PCA
Finally, the variable X1 is taken as an example to explain the weight determination process. The specific steps are as follows:
(1) Determine the coefficients in the linear combinations. The coefficient of X1 in the first principal component linear combination is 0.966/6.847^0.5 = 0.369 according to Tables 5 and 6, where 0.966 is the loading of X1 on the first principal component and 6.847 is the eigenvalue of the first principal component. Similarly, the second coefficient is 0.125 and the third is 0.145.
(2) Determine the coefficients in the comprehensive score model. The coefficient of X1 in the comprehensive score model is (0.369 × 62.242 + 0.125 × 15.16 + 0.145 × 9.254)/(62.242 + 15.16 + 9.254) = 0.302, where 0.369, 0.125, and 0.145 are the coefficients of X1 in the three linear combinations, and 62.242, 15.16, and 9.254 are the variance contribution rates (%) of the three principal components.
The other coefficients are calculated similarly. The final coefficients are listed in Table 7.
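The X1 arithmetic above can be reproduced directly (the 0.125 and 0.145 coefficients are quoted from the text):

```python
# step (1): loading / sqrt(eigenvalue) for the first component, plus the
# quoted second and third coefficients of X1
coeffs = [0.966 / 6.847 ** 0.5, 0.125, 0.145]
# variance contribution rates of the three principal components, %
contribs = [62.242, 15.16, 9.254]
# step (2): contribution-weighted average -> comprehensive score coefficient
comp = sum(c * w for c, w in zip(coeffs, contribs)) / sum(contribs)
```

Running this confirms the values 0.369 and 0.302 reported in the text.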
In Table 7, the non-positive correlation between some variables and the principal components leads to negative coefficients in the comprehensive score model. Taking variable X4 as an example, its loading on the first principal component is −0.969 in Table 6, indicating a very strong negative correlation. A negative coefficient therefore does not represent a small value but a dominant negative correlation, so the absolute values of the comprehensive score coefficients should be used to calculate the normalized weights. Table 8 lists the weights of the 11 variables.
Table 8 Weight of 11 variables
Fig. 6 gives the variable weight histogram of Table 8, showing that the weight magnitude order of the 11 variables is X1, X3, X9, X6, X11, X4, X10, X8, X5, X2, and X7. Compared with X1, X3, X9, X6, X11, X4, and X10, the weights of X8, X5, X2, and X7 are small and thus of relatively low importance. The first seven variables X1, X3, X9, X6, X11, X4, and X10 are therefore selected as key variables. Their renormalized weights are listed in Table 9.
In this case, the data sample includes 100 observations taken at a 1 min sampling interval under normal conditions. Alarms are distributed among the seven selected variables only. The initial alarm frequency should be set higher than one alarm per minute so that the alarm frequency calculated with the final alarm thresholds is no more than one per minute. With an initial frequency of one alarm every 0.8 min, there are 125 alarms in total, as listed in Table 10.
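Distributing the 125 allowed alarms over the key variables in proportion to weight can be sketched with a largest-remainder apportionment, which guarantees that the integer counts sum exactly to the total; the weight values here are assumptions, not the Table 9 values:

```python
def allocate(weights, total):
    """Largest-remainder apportionment: counts sum exactly to `total`."""
    raw = {v: w * total for v, w in weights.items()}
    base = {v: int(r) for v, r in raw.items()}
    short = total - sum(base.values())
    # hand the leftover alarms to the largest fractional remainders
    for v in sorted(raw, key=lambda v: raw[v] - base[v], reverse=True)[:short]:
        base[v] += 1
    return base

weights = {'X1': 0.20, 'X3': 0.18, 'X9': 0.16, 'X6': 0.14,
           'X11': 0.12, 'X4': 0.11, 'X10': 0.09}   # assumed normalized weights
alloc = allocate(weights, 125)                      # 100 min / 0.8 min per alarm
```

Plain rounding of each share can over- or under-shoot the total, which is why the leftover alarms are assigned explicitly.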
The initial variable thresholds are determined according to Table 10. The alarm type (upper or lower limit) is identified from the normal and abnormal data. Taking the first variable X1 as an example, the normal functions [Eqs. (12) and (13)] of X1 are fitted to the 100 normal and 64 abnormal data points. Their probability density function curves [f(x) and g(x)] are depicted in Fig. 7, indicating an upper limit alarm for variable X1. The alarm types of the other variables are obtained similarly.
Table 7 Coefficients of the 11 variables in the linear combinations and the comprehensive score model
Fig. 6. Weight histogram of 11 variables.
Table 9 Weights of 7 variables
Table 10 Number of variable alarms with a frequency of one alarm every 0.8 min
Fig. 7. Process probability density curves of X1.
The control limits of these seven variables and the remaining four variables are calculated with the x_T = μ + Δσ principle and the 3σ principle, respectively. Table 11 gives the calculated results.
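For the key variables, the multiplier Δ in x_T = μ + Δσ follows from the allowed alarm count: the alarm fraction is the upper-tail probability, so Δ is the corresponding standard-normal quantile. A sketch, with an assumed function name:

```python
from statistics import NormalDist

def control_delta(allowed_alarms, n_samples):
    """Delta such that P(X > mu + delta*sigma) = allowed_alarms / n_samples."""
    p = allowed_alarms / n_samples
    return NormalDist().inv_cdf(1.0 - p)
```

For instance, control_delta(5, 100) gives about 1.645. Conversely, the Δ = 1.239 quoted later for X1 would correspond to a tail probability of roughly 0.108 if the data were exactly normal, i.e. about 11 allowed alarms per 100 samples.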
Table 11 Initial alarm thresholds and control limits of variables with a frequency of one alarm every 0.8 min
3.4.1. Calculate thresholds of variables by Johnson transformation
The variable X1 is taken as an example to explain the threshold determination process by inverse Johnson transformation. The 100 observations of X1 are first processed by the Johnson transformation. The value of z is selected as 0.53 through trial and error. The transformation and inverse transformation are shown in Eqs. (14) and (15), with the parameters γ, η, ε, λ, and ε + λ given in Table 12.
Table 12 Parameter values of X1 in Johnson transformation
The normal distribution function F′(x) after transformation is obtained [Eq. (16)] by fitting the transformed data. Fig. 8(a) and (b) show the histograms of X1 before and after transformation, respectively.
The threshold of the transformed variable X1 is obtained according to Δ = 1.239 in Table 11 and the x′ = μ′ + Δσ′ principle. The final threshold of variable X1 is calculated by the inverse transformation [Eq. (15)]. Table 13 lists the thresholds of variable X1 and the other six variables, and the variable thresholds calculated before and after transformation are listed in Table 14.
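The S_B branch of the transformation and its inverse [the roles played by Eqs. (14) and (15)] can be sketched as follows; the parameter values are illustrative, not the fitted Table 12 values:

```python
import math

def sb_forward(x, gamma, eta, eps, lam):
    """S_B transform: z = gamma + eta * ln((x - eps) / (eps + lam - x))."""
    return gamma + eta * math.log((x - eps) / (eps + lam - x))

def sb_inverse(z, gamma, eta, eps, lam):
    """Inverse of sb_forward: maps a normal score back to the original scale."""
    u = (z - gamma) / eta
    return eps + lam / (1.0 + math.exp(-u))

params = dict(gamma=0.1, eta=1.2, eps=40.0, lam=20.0)   # assumed parameter values
x = 47.5
z = sb_forward(x, **params)        # transformed (approximately normal) value
x_back = sb_inverse(z, **params)   # round-trip back to the original scale
```

In the procedure above, the threshold is set in the transformed space as z_T = μ′ + Δσ′ and then mapped back through the inverse, which is exactly the round-trip exercised here.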
Table 14 Alarm thresholds of variables before and after Johnson transformation
Next, the variables and their alarm thresholds before (a) and after (b) Johnson transformation are visualized in parallel coordinates in Fig. 9, including the 100 observations of normal data.
The green and red areas in Fig. 9 represent normal and abnormal conditions, respectively. The green area in Fig. 9(b) is larger than that in Fig. 9(a), indicating that the number of alarms obtained with the thresholds from the Johnson transformation is greatly reduced and the normal working period is correspondingly longer. The method given in this paper can therefore optimize thresholds and reduce massive alarms effectively.
After the variable thresholds are optimized, part of the originally abnormal data becomes normal, causing many lines in the parallel coordinate diagram to change from red to green. However, the abnormal area does not appear to decrease obviously because some optimized variable data are concealed by the red lines in Fig. 9(b). It is thus necessary to calculate the false alarm rate and the number of variable alarms to further clarify the optimizing effect of the proposed method on thresholds.
3.4.2. Calculate false alarm rate (FAR) and missed alarm rate (MAR)
In this section, the false alarm rate (FAR) and missed alarm rate (MAR) are used as indicators to verify the threshold improvement after transformation, with variable X1 as an example.
There are two control limits for alarm thresholds, namely the upper limit (UL) and the lower limit (LL). For the upper limit threshold x_T, alarms are activated when the variable value lies beyond x_T, where the probability density function of process variable x in the normal condition is f(x) and that in the abnormal condition is g(x).
Fig. 9. Parallel coordinate charts of 11 variables before and after Johnson transformation.
The probability density curves of process variable X1 are shown in Fig. 10. Given a threshold x_T, FAR and MAR can be obtained by Eqs. (17) and (18), respectively.
Fig. 10. Process probability density curves of variable X1.
The FAR and MAR of variable X1 and the other six variables before and after Johnson transformation are calculated according to Eqs. (17) and (18), with the results listed in Table 15.
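Eqs. (17) and (18) amount to tail integrals of the two densities. A sketch for an upper-limit threshold, modeling f(x) and g(x) as Gaussians with assumed parameters:

```python
from statistics import NormalDist

f = NormalDist(mu=50.0, sigma=2.0)   # normal-condition density f(x), assumed
g = NormalDist(mu=56.0, sigma=2.0)   # abnormal-condition density g(x), assumed
x_T = 53.0                           # upper-limit threshold, assumed

far = 1.0 - f.cdf(x_T)               # FAR = P(x > x_T | normal condition)
mar = g.cdf(x_T)                     # MAR = P(x <= x_T | abnormal condition)
```

Raising x_T lowers FAR at the cost of a higher MAR, which is the trade-off discussed in the remainder of this section.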
Table 15 FARs and MARs of 7 variables before and after Johnson transformation
Fig. 11 shows the FAR histogram of the 7 variables before and after Johnson transformation according to Table 15, where the light and dark colors represent the FARs before and after transformation, respectively. It can be seen that the proposed method reduces the false alarm rate by more than 15% on average.
Although the MARs in Table 15 increase, the process variables in fact still remain in normal working condition. The increased MAR can therefore be regarded as a normal condition and its effect can be ignored to some extent.
3.4.3. Calculate the number of variable alarms before and after Johnson transformation
Table 16 shows the number of variable alarms before and after Johnson transformation according to the obtained variable thresholds.
Fig. 11. FAR histograms of 7 variables before and after Johnson transformation.
Table 16 Number of variable alarms before and after Johnson transformation
As shown in Table 16, the total number of alarms after transformation decreases by 20%. Because the sampling period is 100 min, the alarm frequency before transformation is one alarm every 0.8 min, and that after transformation is one alarm every 1.02 min on average. The alarm frequency therefore conforms to the alarm limit per unit time specified by the international standard, and the method proposed in this paper can effectively optimize variable alarm thresholds.
At present, one of the main problems in alarm systems is that a large number of invalid alarms, most of which are false alarms, are generated owing to unreasonable alarm threshold settings. Common alarm threshold setting methods include model-based, knowledge-based, and statistics-based methods. In actual industrial processes, the most widely used method is the 3σ principle. It is based on the historical data of process variables in the normal state: the mean μ and the standard deviation σ are calculated to set the threshold range to the interval [μ − 3σ, μ + 3σ]. According to probability theory, the probability of a data point falling within this interval is 99.73%, and the probability of falling beyond this range is only 0.27%, a small-probability event.
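A sketch of this reference method on synthetic normal-condition history (the data are simulated here; the paper uses plant history):

```python
from statistics import NormalDist, mean, stdev
import random

random.seed(42)
history = [random.gauss(50.0, 2.0) for _ in range(100)]   # simulated normal-state data
mu, s = mean(history), stdev(history)
lo, hi = mu - 3 * s, mu + 3 * s                           # 3-sigma alarm band

# theoretical probability that a normal-condition point stays inside the band
inside = NormalDist().cdf(3.0) - NormalDist().cdf(-3.0)   # ~0.9973
```

The 0.27% tail probability quoted above corresponds to 1 − inside; the band itself depends only on the estimated μ and σ.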
The 3σ method is therefore chosen as the reference method for comparison with our proposal. Table 17 lists the FAR and MAR of the variables obtained by the 3σ method.
In the same way, the 100 sets of data under normal working conditions are taken as the normal data for the 7 variables. The comparison of the proposed method before and after transformation with the 3σ principle is shown in Table 18.
In Table 18, the alarm frequency after transformation is 100/101 = 0.99 alarms·min⁻¹, which meets the requirements of the international standard. Comparing Table 16 with Table 18, we can see that the number of alarms decreases both before and after transformation, and the optimization effect of the latter is significant. In Table 18, the fewest alarms are obtained by the 3σ method, which focuses on optimizing a set of variable data conforming to a normal or approximately normal distribution; however, Table 17 shows that its missed alarm rate is very large (close to 1), implying great safety risks in production. Comparatively, our proposed method focuses on multivariate dimension compression and gives highly weighted variables more sensitive alarms to optimize the variable thresholds. Our proposal is therefore advantageous in optimizing alarm thresholds for key variables.
Table 17 The threshold, FAR, and MAR of 7 variables under the 3σ rule
This paper proposed a novel PCA weight and Johnson transformation based alarm threshold optimization method to reduce the number of alarms in chemical processes. Variable weights are first calculated by PCA, and the important variables are selected according to their weights. The alarm control limits of these variables are then calculated based on the weights and the required alarm frequency. After that, the thresholds of the important variables are calculated with the Johnson normal transformation and its inverse under the same control limit percentage. The case study on an atmospheric-vacuum crude distillation example shows that the FAR calculated by this method decreases greatly and the number of variable alarms conforms to the alarm limit per unit time specified by the international standard. This method can therefore avoid many invalid alarms, focus operators' attention on the important alarms, and keep devices running more safely and effectively.
Further research will focus on lowering the MAR to achieve more stable and safer operation.