Combinatorial Optimization Based Analog Circuit Fault Diagnosis with Back Propagation Neural Network
LI Fei (李 飞)1,2*, HE Pei (何 佩)2, WANG Xiang-tao (王向涛)2, ZHENG Ya-fei (郑亚飞)2, GUO Yang-ming (郭阳明)2, JI Xin-yu (姬昕禹)2
1 School of Management, Northwestern Polytechnical University, Xi'an 710072, China; 2 School of Computer Science, Northwestern Polytechnical University, Xi'an 710072, China
Abstract: The reliability of electronic components has become key to mission execution in complex systems. Analog circuits are an important part of electronic components, and their fault diagnosis is far more challenging than that of digital circuits. Simulations and applications have shown that methods based on the back propagation (BP) neural network (BPNN) are effective in analog circuit fault diagnosis. Aiming at the tolerance of analog circuits, a combinatorial optimization diagnosis scheme based on the BPNN was proposed. The main contributions of this scheme include two parts: (1) random tolerance samples were added to the nominal training samples to establish new training samples, which were used to train the BPNN based diagnosis model; (2) the initial weights of the BPNN were optimized by the genetic algorithm (GA) to avoid local minima, and the BPNN was then fine-tuned with the Levenberg-Marquardt algorithm (LMA) in the local solution space to find the optimal or near-optimal solution. The experimental results show preliminarily that the scheme substantially improves the approximation and generalization ability of the whole learning process and effectively promotes the performance of BPNN based analog circuit fault diagnosis.
Key words: analog circuit; fault diagnosis; back propagation (BP) neural network; combinatorial optimization; tolerance; genetic algorithm (GA); Levenberg-Marquardt algorithm (LMA)
The reliability of electronic systems has become key to the normal operation of whole systems, and fault diagnosis is attracting more and more attention. Analog circuits play a vital role in ensuring the availability of industrial systems[1-2]. Compared with digital circuit faults, analog circuit faults mainly have the following inherent characteristics: discontinuous responses, continuously variable element parameters, and element parameter tolerances[3-4]. These make the fault modes too complex to be accurately described and the fault points difficult to locate. Therefore, the development of analog circuit fault diagnosis methods has been slow, and traditional fault diagnosis theories and methods, such as the fault-dictionary method and the parameter identification method[5], usually fail to achieve the desired effect in practice. Analog circuit fault diagnosis theory and methods have thus become a challenging and hot research field[6].
The development of modern intelligent fault diagnosis technology, such as neural network (NN) based methods[3, 7-9], provides new approaches to analog circuit fault diagnosis. Aiming at the tolerance of analog circuits, this paper chooses the back propagation (BP) neural network (BPNN) as the basic diagnosis model and proposes a scheme for analog circuit fault diagnosis: (1) new training samples are used to train the BPNN to improve its generalization ability on actual data; (2) the initial weights of the BPNN are optimized by the genetic algorithm (GA) to avoid local minima; (3) the BPNN is fine-tuned with the Levenberg-Marquardt algorithm (LMA) in the local solution space to find the optimal or near-optimal solution. In this way, the BPNN model for analog circuit diagnosis achieves better learning speed and generalization capability.
1.1 The basic principle of BPNN
BPNN has been widely used in practice and is regarded as the core type of feed-forward neural network. It is an error-correction learning algorithm built on the gradient descent method, which organically combines forward propagation of the input signal with back propagation of the error signal[10]. A typical BPNN is generally composed of three kinds of layers: the input layer, the output layer, and one (or several) hidden layer(s). The topology structure is shown in Fig. 1.
Fig.1 The topology of BPNN
In Fig. 1, x_i (i = 1, 2, …, n) are the input values of the BP neural network, y_j (j = 1, 2, …, m) are the output values, n is the number of input nodes, q is the number of hidden layer nodes, m is the number of output nodes, and v and w are the weights of the BPNN. As seen from Fig. 1, the BPNN is a nonlinear function, with the input values and output values of the network serving as the independent and dependent variables of the function, respectively.
If the selection of inputs and outputs is appropriate, analog circuit faults can be transformed into mapping relations between the inputs and outputs, and then the BPNN, with sufficient training, can be used for analog circuit fault diagnosis.
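To make the mapping of Fig. 1 concrete, the following minimal sketch (illustrative only, not the authors' implementation) computes one forward pass of a single-hidden-layer BPNN with sigmoid activations; the layer sizes, weight scales, and function names are assumptions chosen for illustration.

```python
# Minimal sketch of the three-layer BPNN in Fig. 1: n input nodes, q hidden nodes,
# m output nodes, weights v and w, sigmoid activations (illustrative names only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bpnn_forward(x, v, w, b_hidden, b_out):
    """x: (n,) input vector; v: (q, n) input-to-hidden weights;
    w: (m, q) hidden-to-output weights; returns the (m,) output vector y."""
    h = sigmoid(v @ x + b_hidden)   # hidden-layer activations
    y = sigmoid(w @ h + b_out)      # output-layer activations
    return y

# Toy example: 7 test-point voltages in, 15 circuit states out (sizes for illustration).
rng = np.random.default_rng(0)
n, q, m = 7, 22, 15
v = 0.1 * rng.standard_normal((q, n))
w = 0.1 * rng.standard_normal((m, q))
y = bpnn_forward(rng.random(n), v, w, np.zeros(q), np.zeros(m))
print("most likely state:", int(np.argmax(y)))
```

In the diagnosis setting, the input vector would hold the measured test-point voltages and each output node would correspond to one circuit state.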
1.2 GA
GA is a randomized search technique based on ideas from natural biological evolution[11]. A GA typically has five parts: (1) a representation of a guess called a chromosome; (2) an initial pool of chromosomes; (3) a fitness function; (4) a selection function; (5) a crossover operator and a mutation operator. A chromosome can be a binary string or a more elaborate data structure. The initial pool of chromosomes can be randomly produced or manually created. The fitness function measures the suitability of a chromosome for meeting a specified objective. The selection function decides which chromosomes will participate in the evolution stage of the genetic algorithm, which is made up of the crossover and mutation operators. The crossover operator exchanges genes between two chromosomes and creates two new chromosomes. The mutation operator changes a gene in a chromosome and creates one new chromosome.
GA is a parallel and global search technique that emulates natural genetic operators. It offers a powerful approach to optimization problems and has found extensive applications in global optimization search.
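A compact, generic GA sketch illustrating the five parts listed above is given below. It is not the paper's algorithm: the objective (maximizing the number of ones in a binary string), roulette-wheel selection, one-point crossover, bit-flip mutation, and all parameter values are assumptions used only to make the loop structure visible.

```python
# Generic GA sketch: binary chromosomes, initial pool, fitness, selection,
# crossover, and mutation (illustrative objective and parameters).
import numpy as np
rng = np.random.default_rng(1)

POP, GENES, GENERATIONS = 20, 10, 50
P_CROSS, P_MUT = 0.8, 0.05

def fitness(chrom):
    # Toy objective: maximize the number of 1s (replace with a real objective).
    return chrom.sum()

pool = rng.integers(0, 2, size=(POP, GENES))          # initial pool of chromosomes

for _ in range(GENERATIONS):
    scores = np.array([fitness(c) for c in pool], dtype=float)
    probs = (scores + 1e-9) / (scores + 1e-9).sum()   # roulette-wheel selection
    parents = pool[rng.choice(POP, size=POP, p=probs)]
    children = parents.copy()
    for i in range(0, POP - 1, 2):                    # one-point crossover
        if rng.random() < P_CROSS:
            cut = rng.integers(1, GENES)
            children[i, cut:], children[i + 1, cut:] = (
                parents[i + 1, cut:].copy(), parents[i, cut:].copy())
    flip = rng.random(children.shape) < P_MUT         # bit-flip mutation
    pool = np.where(flip, 1 - children, children)

best = pool[np.argmax([fitness(c) for c in pool])]
print("best chromosome:", best, "fitness:", fitness(best))
```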
1.3 LMA
LMA is one of the most widely used optimization algorithms for nonlinear least-squares problems. It is a nonlinear parameter learning algorithm that converges accurately and quickly[12]. The LMA is a very popular curve-fitting algorithm for generic curve-fitting problems. As with many fitting algorithms, the LMA finds only a local minimum, which is not necessarily the global minimum. However, the LMA can be used to validate a proposed mathematical model by finding the parameter values that best fit the measured data within some acceptable error. In short, the LMA is a popular method for finding the minimum of a function that is a sum of squares of nonlinear functions.
The LMA shows the most efficient convergence during BP training because it acts as a compromise between the first-order optimization method (the steepest-descent method), which has stable but slow convergence, and the second-order optimization method, which has the opposite characteristics[13].
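This compromise can be illustrated with a short curve-fitting sketch (a hypothetical example, not taken from the paper): the damping factor μ is decreased after a successful step, moving toward Gauss-Newton behaviour, and increased after a failed step, moving toward gradient-descent behaviour.

```python
# Illustrative Levenberg-Marquardt loop: fit y = a*exp(b*t) to noisy data by
# minimizing the sum of squared residuals (assumed model and parameter values).
import numpy as np

def residuals(p, t, y):
    a, b = p
    return y - a * np.exp(b * t)

def jacobian(p, t):
    a, b = p
    return np.column_stack([-np.exp(b * t), -a * t * np.exp(b * t)])  # d r/d a, d r/d b

def levenberg_marquardt(p, t, y, mu=1e-2, iters=100):
    for _ in range(iters):
        r = residuals(p, t, y)
        J = jacobian(p, t)
        step = np.linalg.solve(J.T @ J + mu * np.eye(p.size), -J.T @ r)
        p_trial = p + step
        if np.sum(residuals(p_trial, t, y) ** 2) < np.sum(r ** 2):
            p, mu = p_trial, mu * 0.5   # accepted step: behave more like Gauss-Newton
        else:
            mu *= 2.0                   # rejected step: behave more like gradient descent
    return p

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * t) + 0.05 * rng.standard_normal(t.size)
print(levenberg_marquardt(np.array([1.0, 1.0]), t, y))   # approaches [2.0, 1.5]
```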
2.1 Training samples optimization
The rationality of training sample selection and representation strongly influences both the fault diagnosis results and the design of the neural network. If the BPNN based diagnosis model is already relatively optimal, optimizing the training samples is one way to further improve the diagnosis results.
The extrapolation capability of the BPNN is limited, and actual circuit components all have a certain tolerance. A BPNN trained only with nominal values will not obtain good diagnosis results in practical applications, and the correct diagnosis rate will be greatly reduced. Here a new way is proposed to improve the generalization ability of the network from the training samples, which is also a way to improve the diagnosis results: random tolerance samples are added to the training samples. In this way, strong classification ability in analog circuit fault diagnosis is obtained because the tolerances of the components are taken into account.
Suppose there is only one fault mode at a time; then stochastic simulation methods, such as Monte Carlo simulation, can be utilized to collect the random tolerance samples of each fault mode. If M is the number of fault modes, the number of random tolerance samples depends on the number of stochastic simulation runs. That is to say, if P_ij represents one of the random tolerance samples of fault mode i, the new samples are made up of the nominal input value P_N and the P_ij. In this paper, the following input sample vector is applied:

P_i = [P_N, P_i1, P_i2, …, P_ij, …],  i = 1, 2, …, M.    (1)
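A sketch of how such a training set could be assembled is shown below. It is only an illustration of the idea behind Eq. (1): the circuit simulator is replaced by a toy stand-in function, and all names and the perturbation model are assumptions rather than the authors' implementation.

```python
# Illustrative construction of the new training set: for each fault mode, keep the
# nominal response P_N and add Monte-Carlo responses P_ij simulated with component
# values perturbed within the tolerance band.
import numpy as np
rng = np.random.default_rng(2)

TOLERANCE = 0.05       # 5% component tolerance, as used in the experiment below
N_MC = 10              # random tolerance samples kept per fault mode

def simulate_circuit(mode, components):
    """Toy stand-in for the circuit simulator: returns a 7-dimensional test vector."""
    return np.tanh(components[:7] * (mode + 1) * 0.1)

def build_training_set(n_modes, nominal_components):
    samples, labels = [], []
    for mode in range(n_modes):
        samples.append(simulate_circuit(mode, nominal_components))   # nominal sample P_N
        labels.append(mode)
        for _ in range(N_MC):                                        # tolerance samples P_ij
            perturbed = nominal_components * (
                1.0 + rng.uniform(-TOLERANCE, TOLERANCE, nominal_components.shape))
            samples.append(simulate_circuit(mode, perturbed))
            labels.append(mode)
    return np.array(samples), np.array(labels)

X, labels = build_training_set(n_modes=15, nominal_components=np.linspace(1.0, 2.0, 18))
print(X.shape)   # (15 * (1 + N_MC), 7)
```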
2.2 Optimization of BPNN via GA and LMA
2.2.1 Optimization of BPNN via GA
In this paper, GA is used as the learning algorithm for the BP feed-forward network with a fixed network structure. The steps of using GA to optimize the initial weights of the neural network are as follows.
Step 1 Determine the network structure and learning rules. Randomly generate the weights, encode each weight value in a chosen way, and arrange the weights of the network to form a code chain. Here each code chain represents one distribution of weights, and a set of code chains represents a group of BPNNs with different weights.
Step 2 Calculate the error function corresponding to each code chain, thereby determining the fitness function needed by the genetic algorithm. The smaller the error, the higher the fitness value.
Step 3 Choose the several individuals with the largest fitness values as parents.
Step 4 Generate the new group from the current generation via the crossover and mutation operators.
Step 5 Repeat the above steps so that the weight distribution evolves continuously until the training target is met.
The key of the above method is to encode the weight values. Here the specific design and implementation are shown as follows.
(1) Chromosome coding and its description
Binary coding is natural and direct, and can be used by the crossover and mutation operators directly. In order to improve the coding accuracy, longer codes should be used.
(2) Adaptive function design
The target of the GA search is to obtain the weights with the minimum sum of squared errors, which can be calculated from the generated weights and thresholds. The fitness function is the reciprocal of the error function:

f = 1/E,    (2)

where E is the sum of squared errors between the network outputs and the target outputs.
(3) Selection operation
The ranking selection rule is as follows: each of the first n individuals is given two copies, the last n individuals are eliminated, and each of the middle L − 2n individuals is given one copy.
(4) Crossover calculation
Since real-number coding operates directly in the problem space, the two new chromosomes are generated through a linear combination of the two parent chromosomes.
(5) Mutation calculation
The non-uniform mutation operator is utilized to implement the mutation calculation according to the real number coding.
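The pieces above can be combined into a compact sketch of Steps 1-5, shown below. The names and several simplifications are my assumptions rather than the authors' code: thresholds are omitted, a single hidden layer is used, and the non-uniform mutation is replaced by a simple Gaussian perturbation. The chromosome is the real-coded weight vector, the fitness is the reciprocal error of Eq. (2), and the ranking selection and arithmetic crossover follow the rules described above.

```python
# Sketch of GA optimization of the BPNN's initial weights (illustrative only).
import numpy as np
rng = np.random.default_rng(3)

def network_error(chrom, X, T, shape):
    """Sum of squared errors of a one-hidden-layer BPNN whose weights are the chromosome."""
    n, q, m = shape
    v = chrom[:q * n].reshape(q, n)
    w = chrom[q * n:].reshape(m, q)
    H = 1.0 / (1.0 + np.exp(-(X @ v.T)))     # hidden activations for all samples
    Y = 1.0 / (1.0 + np.exp(-(H @ w.T)))     # network outputs
    return np.sum((T - Y) ** 2)

def ga_initial_weights(X, T, shape, pop=100, gens=100, p_cross=0.4, p_mut=0.005):
    n, q, m = shape
    dim = q * n + m * q
    chroms = rng.uniform(-1.0, 1.0, size=(pop, dim))     # real-coded chromosomes
    k = pop // 4                                          # the "first n" / "last n" of the ranking rule
    for _ in range(gens):
        fit = np.array([1.0 / (network_error(c, X, T, shape) + 1e-12) for c in chroms])
        order = np.argsort(-fit)
        chroms = np.concatenate([chroms[order[:k]], chroms[order[:k]],   # best k: two copies
                                 chroms[order[k:pop - k]]])              # middle: one copy; worst k dropped
        for i in range(0, pop - 1, 2):                                   # arithmetic crossover
            if rng.random() < p_cross:
                a = rng.random()
                c1, c2 = chroms[i].copy(), chroms[i + 1].copy()
                chroms[i], chroms[i + 1] = a * c1 + (1 - a) * c2, a * c2 + (1 - a) * c1
        mask = rng.random(chroms.shape) < p_mut                          # simplified mutation
        chroms = chroms + mask * rng.normal(0.0, 0.1, chroms.shape)
    best = min(chroms, key=lambda c: network_error(c, X, T, shape))
    return best     # used as the BPNN's initial weights, then fine-tuned by LMA
```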
2.2.2 Optimization of BPNN via LMA
When LMA optimization is applied to the BPNN, the update formula for the BPNN weights and thresholds is
X(k+1) = X(k) − (JᵀJ + μI)⁻¹Jᵀe,    (3)
where J is the Jacobian matrix of the errors differentiated with respect to the weights, e is the error vector, and μ is a scalar. The value of μ changes smoothly between two extremes: the Gauss-Newton method (when μ → 0) and the steepest descent method (when μ → ∞). As μ increases, JᵀJ can be ignored, so the learning process mainly depends on gradient descent.
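The following short sketch makes Eq. (3) concrete for one update step. It is not the paper's implementation: the Jacobian is estimated by finite differences for simplicity, and all names and the toy error function are assumptions.

```python
# Illustrative single LMA update of a weight vector X(k) according to Eq. (3).
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of the error vector f(x) with respect to x."""
    f0 = f(x)
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f0) / eps
    return J

def lm_step(x, error_vector_fn, mu):
    """X(k+1) = X(k) - (J^T J + mu*I)^(-1) J^T e."""
    e = error_vector_fn(x)
    J = numerical_jacobian(error_vector_fn, x)
    return x - np.linalg.solve(J.T @ J + mu * np.eye(x.size), J.T @ e)

# Toy usage: drive a 2-parameter error vector toward zero.
f = lambda x: np.array([x[0] - 1.0, 10.0 * (x[1] - 2.0)])
x = np.array([0.0, 0.0])
for _ in range(20):
    x = lm_step(x, f, mu=1e-3)
print(x)    # approaches [1.0, 2.0]
```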
2.3 Analog circuit fault diagnosis
The analog circuit diagnosis model is set up based on the BPNN optimized with the new training samples. The diagnosis steps can be described as follows.
Step 1 Based on the nominal value of each circuit component, collect the testing data via simulation;
Step 2 Consider the component tolerance, add suitable random tolerance into the testing data, and establish the new training samples;
Step 3 Design initial BPNN structure and select its parameters;
Step 4 Use the new training samples to train the BPNN with the proposed scheme described in Sections 2.1 and 2.2;
Step 5 Set up the analog circuit fault diagnosis model and perform diagnosis.
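A minimal sketch of Step 5 is given below: a measured test-point vector is normalized, fed to the trained network, and the state whose output is closest to 1 is reported. The normalization scheme, the stand-in network, and all names are assumptions used only for illustration; in practice the network trained by the GA+LMA scheme above would be used.

```python
# Sketch of diagnosing one measured 7-dimensional test-point vector (illustrative).
import numpy as np

def diagnose(test_vector, trained_forward, fault_labels):
    """trained_forward maps a normalized 7-dim vector to 15 outputs (F0-F14)."""
    x = (test_vector - test_vector.min()) / (np.ptp(test_vector) + 1e-12)  # assumed min-max normalization
    y = trained_forward(x)
    return fault_labels[int(np.argmax(y))]   # output closest to 1 indicates the diagnosed state

# Toy usage with a stand-in network (replace with the trained BPNN).
labels = [f"F{i}" for i in range(15)]
stub_net = lambda x: np.eye(15)[int(x.sum() * 7) % 15]
print(diagnose(np.array([3.1, 0.2, 1.5, 4.4, 2.0, 0.9, 3.3]), stub_net, labels))
```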
In order to examine the fault diagnosis efficiency of the proposed model, the following experimental simulation is performed. As in Ref. [14], the video amplifying circuit (shown in Fig. 2) is analyzed.
Fig.2 The video amplifying circuit presented in Ref. [14]
In Fig. 2, there are eleven transistors (Q1-Q11) and eighteen resistors in the circuit. As in Ref. [14], fourteen transistor fault modes are chosen; together with the normal state, they are listed in Table 1 as F0-F14. In Table 1, B represents the base, E the emitter, C the collector, S a short circuit, and O an open circuit. For example, the fault mode "Q1BES" means that the base and emitter of transistor Q1 are short-circuited.
Table 1 Fault modes
According to the optimal selection method of test points[15], nodes 4, 10, 12, 15, 21, 22, and 23 are selected as the test points, represented as V4, V10, etc., so the test vector is a seven-dimensional vector. A fault table of each test point under the fault states and the normal state is established. The normalized sample data corresponding to the faults at each test point are shown in Table 2. These data are the ideal training samples.
Fifty stochastic simulations are executed under 5% component tolerance, and 10 samples are selected at equal intervals as the random tolerance training samples. The new training samples consist of these samples and the ideal training samples. Here, the population size is set to 100, the crossover and mutation probabilities to 0.4 and 0.005 respectively, and the maximum number of evolutionary generations to 100. In order to compare the experimental results, the BPNN structure is the same as in Ref. [14]: the node numbers of the input layer, the two hidden layers, and the output layer are 7, 22, 18, and 15 respectively, the activation function is the sigmoid function, and the error goal is less than 0.01. The feature vectors and their diagnostic results are shown in Tables 3 and 4, respectively.
Table 2 The normalized sample data of fault modes
Table 3 The actual feature vectors of the circuit in Fig. 2 (normalized)
Table 4 The diagnostic results of the actual feature vectors of the circuit in Fig. 2
In Table 4, the blank entries are 0.0000 and * marks a diagnostic error. If the output y_i is close to 1 and the other outputs in the row are approximately 0, the corresponding state is diagnosed correctly. Figure 3 and Table 4 show that: (1) the accuracy of the proposed fault diagnosis model is more than 90%, so it is a better method; (2) with the same precision error, the number of iterations is more than 6 000 in Ref. [14], while the proposed BPNN needs only 374 epochs. Thus, the proposed method improves the learning speed and generalization ability of the network when diagnosing the analog circuit, and the diagnosis results are better.
This paper discusses a combinatorial optimization scheme for analog circuit fault diagnosis based on the BPNN. The method addresses the tolerance of analog circuit components: random tolerance samples are added to the nominal training samples to establish new training samples, which effectively improves the performance of analog circuit fault diagnosis. Moreover, GA optimization is adopted to determine the initial weights of the BPNN, replacing the random initial weights with a better search space. After that, the optimal or near-optimal solution is obtained by fine-tuning the BPNN in the local solution space using the LMA, which increases the convergence speed of the BPNN. The experimental results show that the proposed scheme has good diagnosis performance for analog circuit fault diagnosis and is a valuable method in applications.
[1] Potamianos P G, Mitronikas E D, Safacas A N. Open-Circuit Fault Diagnosis for Matrix Converter Drives and Remedial Operation Using Carrier-Based Modulation Methods [J]. IEEE Transactions on Industrial Electronics, 2014, 61(1): 531-545.
[2] Gritli Y, Zarri L, Rossi C, et al. Advanced Diagnosis of Electrical Faults in Wound-Rotor Induction Machines [J]. IEEE Transactions on Industrial Electronics, 2013, 60(9): 4012-4024.
[3] Du X, Tang D Q, Yang Y Z. The Development of Analog Circuit Fault Diagnosis Technology [J]. Measurement and Control Technology, 2003, 22(7): 1-3.
[4] Guo F Q. Analog Circuit Fault Diagnosis Based on Fuzzy Neural Network [J]. Journal of Shaanxi University of Technology: Natural Sciences, 2009, 25(4): 20-25. (in Chinese)
[5] Han B R, Wu H Y, Huang G. Researches of Analog Circuit Fault Diagnosis Based on Wavelet Neural Network [J]. Electronics Quality, 2009(10): 34-36. (in Chinese)
[6] Guo Y M, Wang X T, Liu C, et al. Electronic System Fault Diagnosis with Optimized Multi-kernel SVM by Improved CPSO [J]. Maintenance and Reliability, 2014, 16(1): 85-91.
[7] Boukra T, Lebaroud A, Clerc G. Statistical and Neural-Network Approaches for the Classification of Induction Machine Faults Using the Ambiguity Plane Representation [J]. IEEE Transactions on Industrial Electronics, 2013, 60(9): 4034-4042.
[8] Toma S, Capocchi L, Capolino G A. Wound-Rotor Induction Generator Inter-Turn Short-Circuits Diagnosis Using a New Digital Neural Network [J]. IEEE Transactions on Industrial Electronics, 2013, 60(9): 4043-4052.
[9] Kumar A, Singh A P. Neural Network Based Fault Diagnosis in Analog Electronic Circuit Using Polynomial Curve Fitting [J]. International Journal of Computer Applications, 2013, 61(6): 28-34.
[10] Hagras H. Embedding Computational Intelligence in Pervasive Spaces [J]. IEEE Pervasive Computing, 2007, 6(3): 85-89.
[11] Holland J H. Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence [M]. Ann Arbor, USA: University of Michigan Press, 1975.
[12] Shawash J, Selviah D R. Real-Time Nonlinear Parameter Estimation Using the Levenberg-Marquardt Algorithm on Field Programmable Gate Arrays [J]. IEEE Transactions on Industrial Electronics, 2013, 60(1): 170-176.
[13] Hagan M T, Menhaj M B. Training Feedforward Networks with the Marquardt Algorithm [J]. IEEE Transactions on Neural Networks, 1994, 5(6): 989-993.
[14] He Y G, Liang G C. The Method of BP Neural Network for Analog Circuit Fault Diagnosis [J]. Journal of Hunan University: Natural Sciences, 2003, 30(5): 35-39. (in Chinese)
[15] Yang S Y. Fault Diagnosis and Reliability Design of Analog Systems [M]. Beijing, China: Tsinghua University Press, 1993. (in Chinese)
Foundation items: National Natural Science Foundation of China (No. 61371024); Aviation Science Fund of China (No. 2013ZD53051); Aerospace Technology Support Fund of China; the Industry-Academy-Research Project of AVIC, China (No. cxy2013XGD14); the Open Research Project of Guangdong Key Laboratory of Popular High Performance Computers/Shenzhen Key Laboratory of Service Computing and Applications, China
1672-5220(2014)06-0774-05
Received date: 2014-08-08
* Correspondence should be addressed to LI Fei, E-mail: lifei@nwpu.edu.cn
CLC number: TN46 Document code: A