# Computational modeling of GMI effect in Co-based amorphous ribbons

- Asli Ayten Kaya

*Journal of Inequalities and Applications* **2013**:293

https://doi.org/10.1186/1029-242X-2013-293

© Kaya; licensee Springer 2013

**Received: **31 January 2013

**Accepted: **28 May 2013

**Published: **14 June 2013

## Abstract

This paper presents a prediction of the giant magneto-impedance (GMI) effect in Co-based amorphous ribbons using an artificial neural network (ANN) approach based on a self-organizing feature map (SOFM). The input parameters included the concentrations of Fe and Co, the ribbon width and the magnetizing frequency. The output parameter was the GMI effect. The results show that the proposed model can be used to estimate the GMI effect in the amorphous ribbons.

## Dedication

Dedicated to Professor Hari M Srivastava

## Introduction

When a soft ferromagnetic conductor is subjected to an alternating current, a large change in the complex impedance of the conductor can be achieved upon applying a magnetic field. This phenomenon is known as the giant magneto-impedance (GMI) effect [1]. This effect has received increasing attention for its potential applications in highly sensitive magnetic sensors [2].
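The GMI response is commonly quantified as a relative impedance change; the exact normalization is not stated above, so the maximum-field convention used below is an assumption drawn from the general GMI literature (e.g. [1]).

```python
import numpy as np

def gmi_ratio(Z, H):
    """Percent GMI ratio: 100 * (Z(H) - Z(H_max)) / Z(H_max).

    Z : impedance magnitudes measured at the fields in H.
    H : applied-field values; Z at the largest |H| serves as reference.
    (Normalization convention assumed, not taken from the paper.)
    """
    Z = np.asarray(Z, dtype=float)
    H = np.asarray(H, dtype=float)
    z_ref = Z[np.argmax(np.abs(H))]  # impedance at the maximum applied field
    return 100.0 * (Z - z_ref) / z_ref

# hypothetical data: impedance falling from 12 ohm at zero field to 4 ohm
H = np.array([0.0, 20.0, 40.0, 80.0])
Z = np.array([12.0, 9.0, 6.0, 4.0])
print(gmi_ratio(Z, H))  # [200. 125.  50.   0.]
```

With this convention the ratio is zero at the reference field and peaks (here at 200%) where the impedance change is largest, which is the quantity a GMI sensor exploits.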

An ANN is essentially a class of mathematical algorithms, since a network can be regarded as a graphic notation for a large class of algorithms. Such algorithms produce solutions to a number of specific problems [3].

In this investigation, the GMI effect is modeled using a self-organizing feature map (SOFM) and previous experimental data [4, 5] for amorphous ribbons made from Co_{70}Fe_{5}Si_{15}B_{10} and Co_{70.4}Fe_{4.6}Si_{15}B_{10} alloys.

## Experimental details

### Self-organizing feature map (SOFM)

Self-organizing feature maps (SOFM), also known as Kohonen maps or topographic maps, were first introduced by von der Malsburg (1973) and, in their present form, by Kohonen (1982). An SOFM is a special neural network that accepts *N*-dimensional input vectors and maps them to the Kohonen layer, in which neurons are organized in an *L*-dimensional lattice (grid) representing the feature space. Such a lattice characterizes the relative positions of neurons with regard to their neighbors, that is, their topological properties rather than exact geometric locations. In practice, the dimensionality of the feature space is often restricted by its visualization aspect and typically is $L=1$, 2 or 3. The objective of the learning algorithm for SOFM neural networks is the formation of a feature map that captures the essential characteristics of the *N*-dimensional input data and maps them onto the typically 1-D or 2-D feature space [6].

The weights are updated according to

$${w}_{ij}(t+1)={w}_{ij}(t)+\eta (t)\bigl({u}_{i}^{k}-{w}_{ij}(t)\bigr)N(j,t),$$

where ${w}_{ij}$ and ${u}_{i}^{k}$ are the *i*th components of the weight vector ${\mathbf{w}}_{j}$ of the neuron ${n}_{j}$ and of the pattern ${\mathbf{u}}^{k}$ applied to the input layer, respectively, $\eta (t)$ is the learning rate and $N(j,t)$ is the neighborhood function, which changes in time. The learning algorithm captures two essential aspects of the map formation, namely competition and cooperation between neurons of the output lattice. Competition determines the winning neuron ${n}_{\mathrm{win}}$, whose weight vector is the one closest to the applied input vector. For this purpose, the input vector **u** is compared with each weight vector ${\mathbf{w}}_{j}$ from the weight matrix **W**, and the index of the winning neuron ${n}_{\mathrm{win}}$ is established by the following formula:

$${n}_{\mathrm{win}}=\underset{j}{argmin}\parallel \mathbf{u}-{\mathbf{w}}_{j}\parallel .$$

Cooperation is expressed through the neighborhood function, commonly taken in the Gaussian form

$$N(j,t)=exp\left(-\frac{{\parallel pos({n}_{j})-pos({n}_{\mathrm{win}})\parallel }^{2}}{2{\sigma }^{2}(t)}\right),$$

where $pos(\cdot )$ is the position of the neuron in the lattice [6].
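As a concrete sketch of the competition step and a Gaussian neighborhood (a common choice of $N(j,t)$; the functional form is not fixed by the text above), assuming a small 1-D lattice:

```python
import numpy as np

def winner_index(u, W):
    """Competition: index of the weight vector closest to input u.

    W has one weight vector per output neuron, shape (n_neurons, n_inputs).
    """
    return int(np.argmin(np.linalg.norm(W - u, axis=1)))

def gaussian_neighborhood(pos, win, sigma):
    """Cooperation: N(j,t) decaying with lattice distance to the winner."""
    d2 = np.sum((pos - pos[win]) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# 1-D lattice of 5 neurons with 2-D inputs (sizes are illustrative)
rng = np.random.default_rng(0)
W = rng.random((5, 2))
pos = np.arange(5, dtype=float).reshape(-1, 1)  # lattice positions
u = np.array([0.5, 0.5])
win = winner_index(u, W)
N = gaussian_neighborhood(pos, win, sigma=1.0)
print(win, N[win])  # the winning neuron has neighborhood value 1.0
```

The neighborhood value is 1 at the winner and falls off with lattice (not input-space) distance, which is what produces the topology-preserving property of the map.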

### SOFM training algorithm

1. Assign small random values to the weights ${\mathbf{w}}_{j}=[{w}_{1j},{w}_{2j},\dots ,{w}_{nj}]$;

2. Choose a vector ${\mathbf{u}}^{k}$ from the training set and apply it as input;

3. Find the winning output node ${n}_{\mathrm{win}}$ by the following criterion:

   $${n}_{\mathrm{win}}=\underset{j}{argmin}\parallel \mathbf{u}-{\mathbf{w}}_{j}\parallel ,$$

   where the minimum is taken over all output nodes *j*;

4. Adjust the weight vectors according to the following update formula:

   $${w}_{ij}(t+1)={w}_{ij}(t)+\eta (t)({u}_{i}-{w}_{ij}(t))N(j,t),$$

   where ${w}_{ij}$ is the *i*th component of the weight vector ${\mathbf{w}}_{j}$, $\eta (t)$ is the learning rate and $N(j,t)$ is the neighborhood function;

5. Repeat Steps 2 through 4 until no significant changes occur in the weights [6].
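The five steps above can be sketched end-to-end in NumPy. This is an illustrative minimal implementation only: the decay schedules for the learning rate and neighborhood width, the Gaussian neighborhood, and the lattice size are common defaults, not values reported in the paper.

```python
import numpy as np

def train_sofm(data, n_neurons=10, epochs=100, eta0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D SOFM following Steps 1-5 of the algorithm above."""
    rng = np.random.default_rng(seed)
    n_inputs = data.shape[1]
    # Step 1: small random initial weights
    W = rng.random((n_neurons, n_inputs)) * 0.1
    pos = np.arange(n_neurons, dtype=float)       # 1-D lattice positions
    for t in range(epochs):
        eta = eta0 * np.exp(-t / epochs)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)      # shrinking neighborhood
        for u in rng.permutation(data):           # Step 2: pick a pattern
            # Step 3: competition - the winner minimizes ||u - w_j||
            win = np.argmin(np.linalg.norm(W - u, axis=1))
            # Gaussian neighborhood around the winner on the lattice
            N = np.exp(-((pos - pos[win]) ** 2) / (2.0 * sigma ** 2))
            # Step 4: w_ij(t+1) = w_ij(t) + eta(t) * (u_i - w_ij(t)) * N(j,t)
            W += eta * N[:, None] * (u - W)
    # Step 5: in practice, iterate until the weight updates become negligible
    return W

data = np.random.default_rng(1).random((200, 2))
W = train_sofm(data)
print(W.shape)  # (10, 2)
```

Because each update moves every weight vector toward the input by a factor $\eta(t)N(j,t)\le 1$, the trained weights stay inside the convex hull of the initial weights and the data.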

### Developed ANN model

The input parameters of the model were the magnetizing field (*H*), ribbon width (*l*), magnetizing frequency (*f*), concentration of Co (Co%) and concentration of Fe (Fe%). The output parameter was the GMI effect (GMI%). The number of hidden layers, the number of neurons in each hidden layer and the training parameters were determined to be optimal through trial and error. After several trials, the best result was obtained from a four-layered network. In this network, the hyperbolic tangent function is used in the hidden and output layers. The number of epochs was 30,000 for training.

## Results and discussion

In this study an attempt was made to predict the GMI effect of Co-based amorphous ribbons (Co_{70}Fe_{5}Si_{15}B_{10} and Co_{70.4}Fe_{4.6}Si_{15}B_{10}) using artificial neural networks. To achieve this goal, the magnetizing field (*H*), ribbon width (*l*), magnetizing frequency (*f*), concentration of Co (Co%) and concentration of Fe (Fe%) were used as the inputs of the networks, and GMI% data points were used as the output. Finally, the network with the least cross-validation error was selected for the Co-based amorphous ribbons.

The model was tested for Co_{70}Fe_{5}Si_{15}B_{10} and Co_{70.4}Fe_{4.6}Si_{15}B_{10} amorphous ribbons with ribbon widths of 0.5-3 mm. The GMI curves obtained from the ANN are in about 99% agreement with the experimental ones.

These results show that the proposed model is potentially useful to sensor designers for predicting GMI curves in cases where measurements would be time-consuming.

## Conclusions

In this study the proposed model was developed from experimental data for Co-based amorphous ribbons. The study demonstrates the applicability and feasibility of artificial neural network models for predicting the GMI effect of Co_{70}Fe_{5}Si_{15}B_{10} and Co_{70.4}Fe_{4.6}Si_{15}B_{10} amorphous ribbons with ribbon widths of 0.5, 1 and 3 mm over a frequency range of 0.1-1 MHz. The average correlation and prediction error for the tested amorphous ribbons were found to be 99% and 1%, respectively. These results show that the predicted values for these ribbons are in good agreement with the measured ones. Therefore, the model enables a researcher to evaluate the sensor performance before manufacture.

## References

1. Phan MH: Giant magnetoimpedance materials: fundamentals and applications. *Prog. Mater. Sci.* 2008, 53: 323–420. 10.1016/j.pmatsci.2007.05.003
2. Gong WY, Wu ZM, Lin H, Yang XL, Zhao Z: Longitudinally driven giant magneto-impedance effect enhancement by magneto-mechanical resonance. *J. Magn. Magn. Mater.* 2008, 320: 1553–1556. 10.1016/j.jmmm.2008.01.020
3. Zurada JM: *Introduction to Artificial Neural Systems*. West Publishing Company, Eagan; 1992.
4. Goncalves LAP, Soares JM, Machado FLA, de Azevedo WM: GMI effect in the low magnetostrictive Co_{70}Fe_{5}Si_{15}B_{10} alloys. *Physica B* 2006, 384: 152–154. 10.1016/j.physb.2006.05.210
5. Mendes KC, Machado FLA: Enhanced GMI in ribbons of Co_{70.4}Fe_{4.6}Si_{15}B_{10} alloy. *J. Magn. Magn. Mater.* 1998, 177–181: 111–112.
6. Halici U: Data Clustering and Self-Organizing Feature Maps. In *Artificial Neural Networks*. Metu EEE, Ankara; 2004: 135–136. EE543 Lecture Notes, ch. 8.

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.