Saturday, September 7, 2013

A short tutorial of nonlinear circuit theory - part 1

Today we look at applications of dynamical system theory in electrical engineering.  Why is dynamical systems theory important for electrical engineering?  At the most fundamental level, electrical engineering deals with the physics of electricity.  We can describe electric and magnetic phenomena via Maxwell's equations and the interaction of electrons, ions, and protons in physical matter.  Such mathematical models in general give us partial differential equations.  However, if we describe things in such detail, the resulting equations become so complicated that either it takes a long time to simulate the system or very little useful information can be extracted from the equations.  Therefore in many electrical circuit applications we assume the lumped circuit approximation: in a lumped circuit, electrical signals propagate instantaneously through the wires.  Exceptions to this approximation occur in applications such as high-speed electronics and wireless electronics.

What follows is a brief introduction to nonlinear circuit theory; a more in-depth study can be found in Ref. [1] and Ref. [2].  In general an electrical circuit consists of devices connected by wires.  Because of the lumped circuit approximation, the lengths of the wires are irrelevant.  Each device can be thought of as a black box with n terminals, in which case it is called an n-terminal device.  For example, a resistor or a capacitor is a 2-terminal device and a transistor is a 3-terminal device.  Depending on our level of abstraction, a device could be a single transistor, an amplifier consisting of many transistors, or a complicated device such as a microprocessor with billions of transistors.  Each n-terminal device can be represented by a digraph with n branches connected at the datum node (Fig. 1).



Figure 1

Resistive and dynamic 2-terminal devices
In this short introduction we will only focus on electrical circuits composed of 2-terminal devices.  The two most important physical quantities in electrical circuits are voltages (v) and currents (i).  Two further quantities are the electrical flux linkage ($\phi$) and the charge (q), which are related to v and i via:

\[ i = \frac{dq}{dt},\qquad v = \frac{d\phi}{dt}
\]

A 2-terminal device can be represented abstractly as a digraph with a single branch and an associated reference direction.

A device which relates the voltage across it and the current through it is called a resistor.  In particular, it is described by the constitutive relation $R(v,i) = 0$.  A device which relates the charge on it with the voltage across it is called a capacitor and is described by the constitutive relation $C(q,v) = 0$.  A device which relates its flux linkage with the current through it is called an inductor and is described by $L(\phi,i) = 0$.  A resistor is called voltage-controlled if its constitutive relation can be written as a function of the voltage, i.e. $i = f(v)$.  Similar definitions exist for current-controlled resistors, charge-controlled capacitors, etc.  Resistors are static devices while capacitors and inductors are dynamic devices.  As we will see shortly, a circuit without dynamic devices has no dynamics; the introduction of dynamic devices allows us to describe the behavior of the circuit as a continuous-time dynamical system.
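
To make the controlled forms concrete, here is a minimal sketch in Python; the element values and the cubic resistor characteristic below are invented purely for illustration and are not taken from the references.

# Voltage-controlled resistor: the constitutive relation solved for i, i.e. i = f(v)
def resistor_current(v):
    return 1e-3 * (v**3 - v)      # hypothetical cubic characteristic (amperes)

# Charge-controlled linear capacitor: v expressed as a function of q
def capacitor_voltage(q, C=1e-6):
    return q / C

# Flux-controlled linear inductor: i expressed as a function of phi
def inductor_current(phi, L=1e-3):
    return phi / L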

What about a device which relates flux linkage and charge via a relation $M(\phi, q) = 0$?  Such devices are called memristors (MEMory ResISTORs), as postulated by Leon Chua in 1971 (Ref. [3]).  Even though such devices can be synthesized using transistors and amplifiers, an elementary implementation did not exist until Stan Williams and his team at HP built memristors from titanium dioxide in 2008 (Ref. [4]).

Kirchhoff's Laws
An electrical circuit can be represented abstractly as a digraph with a branch current and a branch voltage associated to each branch.  We assume that the circuit is connected, i.e. the corresponding digraph is (weakly) connected (though not necessarily strongly connected).  Consider a connected digraph with n nodes and b branches.  Since the digraph is connected, $b\geq n-1$.  The node-edge incidence matrix (or simply the incidence matrix) of the digraph is the n by b matrix A such that $A_{jk} = 1$ if branch k leaves node j, $A_{jk} = -1$ if branch k enters node j, and $A_{jk} = 0$ otherwise.
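
As a running toy example (my own, not from the references), the following Python sketch builds the incidence matrix of a small digraph given its branches as (from-node, to-node) pairs:

import numpy as np

def incidence_matrix(n_nodes, branches):
    # branches: list of (j_from, j_to) pairs, nodes numbered 0, ..., n_nodes - 1
    A = np.zeros((n_nodes, len(branches)))
    for k, (j_from, j_to) in enumerate(branches):
        A[j_from, k] = 1.0    # branch k leaves node j_from
        A[j_to, k] = -1.0     # branch k enters node j_to
    return A

# 3 nodes (take node 2 as the datum node) and 3 branches forming a single loop
A = incidence_matrix(3, [(0, 1), (1, 2), (2, 0)])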

Kirchhoff's laws describe the relationships that these currents and voltages must satisfy due to physical constraints such as charge conservation.

Kirchhoff's current law (KCL) states that the sum of the currents entering a node is equal to the sum of the currents leaving the node.  A moment's thought will convince you that this requirement for the j-th node corresponds to the product of the j-th row of A with the branch current vector i being equal to 0.  Thus Kirchhoff's current law can be stated as Ai = 0.
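
Continuing the toy loop example above, a current that simply circulates around the loop satisfies KCL at every node:

i = np.array([1.0, 1.0, 1.0])   # the same current flows through each branch of the loop
print(A @ i)                    # [0. 0. 0.], i.e. Ai = 0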

Theorem: the matrix A has rank n-1.

Proof: The k-th column of A contains exactly one entry 1 and one entry -1, in the rows corresponding to the two nodes that branch k connects.  To show that the rank of A is at least n-1, suppose some nontrivial linear combination of n-1 of the rows of A sums to zero, and let S be the set of nodes whose rows appear with nonzero coefficients.  Then S is a nonempty proper subset of the nodes (the remaining node was excluded from the start).  Since the graph is connected, there is a branch joining some node in S to a node m outside S.  The column of A corresponding to this branch has exactly one nonzero entry (a 1 or a -1) among the rows of S, so the linear combination is nonzero in that column, a contradiction.  Hence any n-1 rows of A are linearly independent, and the rank of A is at least n-1.  On the other hand, the rows of A sum to zero, so A cannot have full rank n.  Thus the rank of A is exactly n-1. $\blacksquare$

Thus we can delete one row from A without losing any information, since that row is the negative of the sum of the other rows.  In the following we assume that A is an (n-1) by b matrix with rank n-1 (the reduced incidence matrix).
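
For the toy loop example, deleting the row of the datum node indeed loses nothing:

A_reduced = A[:-1, :]                      # drop the row of the datum node (node 2)
print(np.linalg.matrix_rank(A))            # 2, i.e. n - 1
print(np.linalg.matrix_rank(A_reduced))    # still 2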

For a branch k directed from node $j_1$ to node $j_2$, Kirchhoff's voltage law (KVL) states that the voltage $v_k$ across branch k is equal to the difference $e_{j_1}-e_{j_2}$, where $e_{j}$ is the voltage between the j-th node and the datum node.  It is easy to see that Kirchhoff's voltage law can be expressed as $A^Te - v = 0$, or $A^Te = v$.

Thus $v$ is in the range of the matrix $A^T$.  Since $A^T$ has rank n-1, another way to say this is that v lies in an (n-1)-dimensional subspace of ${\Bbb R}^b$.  Thus we can also express KVL via an orthogonality condition Bv = 0, where B is a (b-(n-1)) by b matrix whose rows span the orthogonal complement of the range of $A^T$.  One such matrix B is the fundamental loop matrix, which describes a set of b-(n-1) loops in the circuit from which all loops in the circuit can be derived.  This gives another way to view KVL: the algebraic sum of the branch voltages around any loop is 0.
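
Again for the toy loop example, both formulations of KVL can be checked numerically; the node voltages below are arbitrary:

e = np.array([0.5, -1.2])          # node-to-datum voltages of the two non-datum nodes
v = A_reduced.T @ e                # branch voltages generated by KVL
B = np.array([[1.0, 1.0, 1.0]])    # fundamental loop matrix: the single loop uses every branch once
print(B @ v)                       # [0.], the branch voltages around the loop sum to zero
print(B @ A_reduced.T)             # [[0. 0.]], the rows of B annihilate the range of A^T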

The complete description of the electrical circuit is given by KVL, KCL, the constitutive relations of the elements, and the defining equations $i = \frac{dq}{dt}$ and $v = \frac{d\phi}{dt}$.  This gives us a set of constrained differential equations which describes the system.  It is not always possible to reduce them to a set of ordinary differential equations.  When we can, the equations take the form

\begin{equation}\label{eqn:state_circuit} \begin{array}{lclr}
                      \frac{dq}{dt} & =& f_1(q,\phi) & \qquad \mbox{Eq.} (1)\\
                      \frac{d\phi}{dt} & = & f_2(q,\phi) & \end{array}
\end{equation}

where q is the charge vector corresponding to the charges of all the capacitors and $\phi$ is the flux vector corresponding to the fluxes of all the inductors.  The state vector is $(q,\phi)$.  In the case where the capacitors and inductors are linear, we have q = Cv and $\phi = Li$, so for $C\neq 0$ and $L\neq 0$ we can also choose $(v,i)$ as the state vector.  This is often more desirable since voltages and currents are more easily measured.  More generally, this change of variables can be done whenever there is a diffeomorphism between q and v and between $\phi$ and i; the system expressed in (v,i) is then topologically conjugate to Eq. (1) via the diffeomorphism.  The number of equations in Eq. (1) equals the total number of capacitors and inductors, since we only have 2-terminal capacitors and inductors.
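
As a concrete instance of Eq. (1), consider a single loop containing one linear capacitor and one linear inductor, so that $\frac{dq}{dt} = \phi/L$ and $\frac{d\phi}{dt} = -q/C$.  The Python sketch below integrates these state equations numerically; the element values are hypothetical.

from scipy.integrate import solve_ivp

C, L = 1e-6, 1e-3                 # hypothetical values: 1 uF and 1 mH

def state_eqs(t, x):
    q, phi = x
    return [phi / L, -q / C]      # dq/dt = inductor current, dphi/dt = -capacitor voltage

sol = solve_ivp(state_eqs, (0.0, 1e-3), [1e-6, 0.0], max_step=1e-6)
print(sol.y[0, -1])               # the charge oscillates at 1/(2*pi*sqrt(L*C)), about 5 kHz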

When there are no dynamic elements, the circuit is called a resistive circuit and the equations of the circuit are given by Ai = 0, Bv = 0, and R(v,i) = 0.  This is a set of nonlinear algebraic equations, the solutions of which define the operating voltages and currents of the circuit.  There are no dynamics in this system.  A solution of these equations is an operating point of the circuit and corresponds to the actual voltages and currents we observe in the circuit.
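
For example (my own toy circuit, not from the post's references), take a 5 V source in series with a 1 kΩ linear resistor and an exponential, diode-like voltage-controlled resistor.  KCL at the middle node gives a single nonlinear equation, which can be solved for the operating point:

from scipy.optimize import brentq
import numpy as np

E, R = 5.0, 1e3                     # hypothetical source voltage and series resistance
Is, Vt = 1e-12, 0.025               # hypothetical parameters of the exponential resistor, i = Is*(exp(v/Vt) - 1)

def kcl_residual(v):
    # current supplied through R minus current drawn by the nonlinear resistor
    return (E - v) / R - Is * (np.exp(v / Vt) - 1.0)

v_op = brentq(kcl_residual, 0.0, E)     # operating-point voltage across the nonlinear resistor
print(v_op, (E - v_op) / R)             # roughly 0.56 V and 4.4 mA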

Now a circuit with more than one operating point presents somewhat of a problem.  Which operating point does the physical circuit choose?  In practice, the circuit can be at different operating points depending on when or how you power on the system.  To resolve this problem we postulate that parasitic capacitances and inductances are present, in parallel and in series respectively, which results in a dynamic circuit.  Consider the state equations (Eq. (1)).  The equilibrium points are the points where $dq/dt = 0$ for the capacitors and $d\phi/dt = 0$ for the inductors, i.e. where the capacitor currents and the inductor voltages are zero.  Thus we can replace each capacitor by an open circuit and each inductor by a short circuit, which recovers the original resistive circuit.

Thus the operating points of the resistive circuit correspond to the equilibrium points of the dynamic circuit, and the initial conditions determine which equilibrium point we reach at steady state (assuming that the state trajectory of the dynamic circuit converges to an equilibrium point at all), i.e. which operating point we observe.
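
As an illustration (a toy model of my own), take a nonlinear resistor with the cubic characteristic $i = v^3 - v$ (in normalized units) and place a parasitic capacitor across it.  The resistive circuit has three operating points, v = -1, 0, 1, and the augmented dynamic circuit converges to one of the two stable ones depending on the initial voltage:

from scipy.integrate import solve_ivp

def g(v):                        # hypothetical cubic resistor characteristic, i = g(v)
    return v**3 - v

def parasitic_dynamics(t, x, C=1.0):
    return [-g(x[0]) / C]        # the parasitic capacitor absorbs the resistor current: C dv/dt = -g(v)

for v0 in (-0.2, 0.2):           # two nearby initial conditions
    sol = solve_ivp(parasitic_dynamics, (0.0, 50.0), [v0])
    print(v0, sol.y[0, -1])      # they settle near the two different stable operating points -1 and +1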

In part 2, we will look at some simple circuits and their corresponding state equations.  
References:

[1] L. O. Chua, Introduction to nonlinear network theory, McGraw-Hill, 1969.
[2] L. O. Chua, C. A. Desoer, E. S. Kuh, Linear and nonlinear circuits, McGraw-Hill, 1987. 
[3] L. O. Chua, Memristor-The missing circuit element, IEEE Transactions on Circuit Theory, Sept. 1971, vol. 18, no. 5, pp. 507-519.
[4] D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, The missing memristor found, Nature, vol. 453, 1 May 2008, pp. 80-83.
