Computational modeling of GMI effect in Co-based amorphous ribbons
Journal of Inequalities and Applications, volume 2013, Article number: 293 (2013)
Abstract
This paper presents a prediction of the giant magnetoimpedance (GMI) effect in Co-based amorphous ribbons using an artificial neural network (ANN) approach based on a self-organizing feature map (SOFM). The input parameters were the magnetizing field, the compositions of Fe and Co, the ribbon width and the magnetizing frequency. The output parameter was the GMI effect. The results show that the proposed model can be used for estimation of the GMI effect in the amorphous ribbons.
Dedication
Dedicated to Professor Hari M Srivastava
Introduction
When a soft ferromagnetic conductor is subjected to an alternating current, a large change in the complex impedance of the conductor can be achieved upon applying a magnetic field. This phenomenon is known as the giant magnetoimpedance (GMI) effect [1]. This effect has received increasing attention for its potential applications in highly sensitive magnetic sensors [2].
An ANN is essentially a class of mathematical algorithms, since a network can be regarded as a graphical notation for a large class of algorithms. Such algorithms produce solutions to a number of specific problems [3].
In this investigation, the GMI effects are modeled using a self-organizing feature map (SOFM) and previous experimental data [4, 5] of amorphous ribbons made from Co_{70}Fe_{5}Si_{15}B_{10} and Co_{70.4}Fe_{4.6}Si_{15}B_{10} alloys.
Experimental details
Self-organizing feature map (SOFM)
Self-organizing feature maps (SOFM), also known as Kohonen maps or topographic maps, were first introduced by von der Malsburg (1973) and in their present form by Kohonen (1982). The SOFM is a special neural network that accepts N-dimensional input vectors and maps them to the Kohonen layer, in which neurons are organized in an L-dimensional lattice (grid) representing the feature space. Such a lattice characterizes the relative position of neurons with regard to their neighbors, that is, their topological properties rather than exact geometric locations. In practice, the dimensionality of the feature space is often restricted by its visualization aspect and typically is $L = 1, 2$ or 3. The objective of the learning algorithm for SOFM neural networks is the formation of a feature map which captures the essential characteristics of the N-dimensional input data and maps them onto the typically 1-D or 2-D feature space [6].
During training, the weights are updated according to the formula

$$w_{ij}(t+1) = w_{ij}(t) + \eta(t)\bigl(u_i^{k} - w_{ij}(t)\bigr)N(j,t),$$

where $w_{ij}$ is the $i$th component of the weight vector $\mathbf{w}_j$ of the neuron $n_j$, $u_i^{k}$ is the $i$th component of the pattern $\mathbf{u}^{k}$ applied to the input layer, $\eta(t)$ is the learning rate and $N(j,t)$ is the time-varying neighborhood function. The learning algorithm captures two essential aspects of map formation, namely competition and cooperation between the neurons of the output lattice. Competition determines the winning neuron $n_{\mathrm{win}}$, whose weight vector is the one closest to the applied input vector. For this purpose, the input vector $\mathbf{u}$ is compared with each weight vector $\mathbf{w}_j$ from the weight matrix $W$, and the index of the winning neuron $n_{\mathrm{win}}$ is established by

$$n_{\mathrm{win}} = \operatorname*{arg\,min}_{j}\,\|\mathbf{u} - \mathbf{w}_j\|.$$
All neurons $n_j$ located in a topological neighborhood of the winning neuron $n_{\mathrm{win}}$ have their weights updated, usually with a strength $N(j)$ related to their distance $d(j)$ from the winning neuron, where $d(j)$ can be calculated using the formula

$$d(j) = \|\operatorname{pos}(n_j) - \operatorname{pos}(n_{\mathrm{win}})\|,$$

where $\operatorname{pos}(\cdot)$ is the position of the neuron in the lattice [6].
SOFM training algorithm

1. Assign small random values to the weights $\mathbf{w}_j = [w_{1j}, w_{2j}, \ldots, w_{nj}]$;

2. Choose a vector $\mathbf{u}^{k}$ from the training set and apply it as input;

3. Find the winning output node $n_{\mathrm{win}}$ by the criterion
   $$n_{\mathrm{win}} = \operatorname*{arg\,min}_{j}\,\|\mathbf{u} - \mathbf{w}_j\|,$$
   where $\|\cdot\|$ denotes the Euclidean norm and $\mathbf{w}_j$ is the weight vector connecting the input nodes to output node $j$;

4. Adjust the weight vectors according to the update formula
   $$w_{ij}(t+1) = w_{ij}(t) + \eta(t)\bigl(u_i - w_{ij}(t)\bigr)N(j,t),$$
   where $w_{ij}$ is the $i$th component of the weight vector $\mathbf{w}_j$, $\eta(t)$ is the learning rate and $N(j,t)$ is the neighborhood function;

5. Repeat Steps 2 through 4 until no significant changes occur in the weights [6].
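The training loop above can be sketched in NumPy. The map size, the learning-rate and neighborhood schedules, and the training vectors are illustrative assumptions, not values from the paper; a Gaussian neighborhood is used as one common choice for $N(j,t)$.

```python
# Minimal SOFM training sketch following Steps 1-5 above.
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 5           # e.g. H, l, f, Co%, Fe% (assumed feature count)
grid_w, grid_h = 6, 6  # 2-D Kohonen lattice (L = 2), size chosen arbitrarily
n_nodes = grid_w * grid_h
epochs = 200

# Step 1: small random weights, one row per lattice node
W = rng.uniform(-0.1, 0.1, size=(n_nodes, n_inputs))
# Lattice positions pos(n_j), used to compute d(j)
pos = np.array([[i % grid_w, i // grid_w] for i in range(n_nodes)], dtype=float)

X = rng.random((100, n_inputs))  # placeholder training vectors u^k

for t in range(epochs):
    eta = 0.5 * (1 - t / epochs)          # decaying learning rate eta(t)
    sigma = 3.0 * (1 - t / epochs) + 0.5  # shrinking neighborhood radius
    for u in X:                           # Step 2: present one pattern
        # Step 3: winner = arg min_j ||u - w_j||
        win = np.argmin(np.linalg.norm(W - u, axis=1))
        # d(j) = ||pos(n_j) - pos(n_win)||
        d = np.linalg.norm(pos - pos[win], axis=1)
        # Gaussian neighborhood N(j, t)
        N = np.exp(-(d ** 2) / (2 * sigma ** 2))
        # Step 4: w_ij(t+1) = w_ij(t) + eta(t) (u_i - w_ij(t)) N(j, t)
        W += eta * N[:, None] * (u - W)
```

After training, each weight vector has moved from its small random start toward a region of the input data, so nearby lattice nodes respond to similar inputs.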
Developed ANN model
Two different amorphous ribbons were used for the experimental verification of the proposed model. A total of 1,200 input vectors were obtained from the amorphous ribbons [4, 5]. The developed neural network, shown in Figure 1, has five input neurons, two hidden layers of nine and twelve neurons, one output neuron, and full connectivity between neurons. The input parameters were the magnetizing field (H), ribbon width (l), magnetizing frequency (f), concentration of Co (Co%) and concentration of Fe (Fe%). The output parameter was the GMI effect (GMI%). The number of hidden layers, the number of neurons in each hidden layer and the training parameters were determined to be optimal through trial and error. After several trials, the best result was obtained from a four-layered network. In this network, the hyperbolic tangent function is used in the hidden and output layers. The network was trained for 30,000 epochs.
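A forward pass through the 5-9-12-1 architecture described above can be sketched as follows. The weights here are random placeholders (the paper does not publish the trained weights), and the input scaling is an assumption; with a tanh output the GMI% target would have to be scaled into (-1, 1) during training.

```python
# Forward pass of a fully connected 5-9-12-1 network with tanh activations,
# matching the architecture described in the text. Weights are placeholders.
import numpy as np

rng = np.random.default_rng(1)

def layer(n_in, n_out):
    """Random weight matrix and zero bias for one fully connected layer."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

W1, b1 = layer(5, 9)    # inputs (H, l, f, Co%, Fe%) -> first hidden layer
W2, b2 = layer(9, 12)   # first hidden layer -> second hidden layer
W3, b3 = layer(12, 1)   # second hidden layer -> GMI% output

def predict(x):
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return np.tanh(h2 @ W3 + b3)  # tanh in the output layer, as stated above

# One hypothetical, already-scaled input vector
x = np.array([1.0, 0.5, 0.3, 0.704, 0.046])
y = predict(x)  # scaled GMI% estimate in (-1, 1)
```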
Results and discussion
In this study an attempt was made to predict the GMI effect of Co-based amorphous ribbons (Co_{70}Fe_{5}Si_{15}B_{10} and Co_{70.4}Fe_{4.6}Si_{15}B_{10}) using artificial neural networks. To this end, the magnetizing field (H), ribbon width (l), magnetizing frequency (f), concentration of Co (Co%) and concentration of Fe (Fe%) were used as the inputs of the networks, and GMI% data points were used as the output. Finally, the network with the least cross-validation error was selected for the Co-based amorphous ribbons.
As indicated in Figure 2, the GMI data points predicted by the neural networks follow the experimental results closely. The predicted GMI% values match the experimental results very well for the training data.
Figure 3 shows the GMI effect obtained from the prediction model and the experimental data in the 0.1–1 MHz range for Co_{70}Fe_{5}Si_{15}B_{10} and Co_{70.4}Fe_{4.6}Si_{15}B_{10} amorphous ribbons with 0.5–3 mm ribbon widths. The GMI curves obtained from the ANN are in about 99% agreement with the experimental ones.
All the tested samples within the range of the training data show significant correlation coefficients. The ANN model was also assessed on 1 mm wide amorphous ribbons, which are outside the training data. Figure 4 shows good agreement with the experimental data; therefore, the ANN can be used for the prediction and modeling of GMI values.
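The quoted agreement between measured and predicted curves can be quantified with the Pearson correlation coefficient and a mean percentage error. The GMI% arrays below are illustrative placeholders, not the paper's data.

```python
# Correlation and average percentage error between measured and predicted
# GMI% values; the arrays are invented for illustration only.
import numpy as np

measured = np.array([12.1, 24.8, 41.3, 57.6, 70.5])   # placeholder GMI% data
predicted = np.array([12.0, 25.0, 41.0, 58.0, 70.0])  # placeholder ANN output

r = np.corrcoef(measured, predicted)[0, 1]                       # Pearson r
mape = np.mean(np.abs(predicted - measured) / measured) * 100.0  # mean % error
```

For predictions tracking the measurements this closely, `r` is above 0.99 and the mean percentage error is on the order of 1%, which is how agreement figures like those quoted above are typically obtained.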
These results show that the proposed model is potentially useful for sensor designers in predicting GMI curves in cases where measurements would be time-consuming.
Conclusions
In this study the proposed model was developed from experimental data for the Co-based amorphous ribbons. This study demonstrates the applicability and feasibility of artificial neural network models to predict the GMI effect for Co_{70}Fe_{5}Si_{15}B_{10} and Co_{70.4}Fe_{4.6}Si_{15}B_{10} amorphous ribbons with different ribbon widths (0.5, 1 and 3 mm) over a frequency range of 0.1–1 MHz. The average correlation and prediction error were found to be 99% and 1%, respectively, for the tested amorphous ribbons. These results show that the predicted values for these ribbons are in good agreement with the measured ones. Therefore, this model allows a researcher to evaluate sensor performance before manufacture.
References
Phan MH: Giant magnetoimpedance materials: fundamentals and applications. Prog. Mater. Sci. 2008, 53: 323–420. 10.1016/j.pmatsci.2007.05.003
Gong WY, Wu ZM, Lin H, Yang XL, Zhao Z: Longitudinally driven giant magnetoimpedance effect enhancement by magnetomechanical resonance. J. Magn. Magn. Mater. 2008, 320: 1553–1556. 10.1016/j.jmmm.2008.01.020
Zurada JM: Introduction to Artificial Neural Systems. West Publishing Company, Eagan; 1992.
Goncalves LAP, Soares JM, Machado FLA, de Azevedo WM: GMI effect in the low magnetostrictive Co_{70}Fe_{5}Si_{15}B_{10} alloys. Physica B 2006, 384: 152–154. 10.1016/j.physb.2006.05.210
Mendes KC, Machado FLA: Enhanced GMI in ribbons of Co_{70.4}Fe_{4.6}Si_{15}B_{10} alloy. J. Magn. Magn. Mater. 1998, 177–181: 111–112.
Halici U: Data clustering and self-organizing feature maps. In Artificial Neural Networks. METU EEE, Ankara; 2004: 135–136. EE543 Lecture Notes, ch. 8
Additional information
Competing interests
The author declares that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Kaya, A.A. Computational modeling of GMI effect in Co-based amorphous ribbons. J Inequal Appl 2013, 293 (2013). https://doi.org/10.1186/1029-242X-2013-293