- **The User Equipment (UE)**
- **The UMTS Terrestrial Radio Access Network (UTRAN)**
- **The Core Network (CN)**

As with any wireless network out there, the main purpose is to provide access to services (data, voice, etc.). The services network is divided into the **Public Switched Telephone Network (PSTN)**, which provides voice and special telephone-related services, and the Internet, which provides a wide range of packet data services such as email or access to the world wide web. These things are probably all sounding very familiar to you, and they should, because they are critical in maintaining today’s society, and almost everyone living in the modern world uses at least one of these services at some point in their daily lives.

The UMTS mobile, also known as the User Equipment (UE), interfaces with the UTRAN via the UMTS physical layer radio interface. In addition to radio access, the UE provides the subscriber with access to services and profile information. For example, the cell phone you carry in your pocket is the UE (user equipment) that interfaces with the cell phone towers that companies like Verizon, AT&T, and Sprint provide for you.

In UMTS, there are two Core Network (CN) configurations, the Circuit Switched CN (CS-CN) and Packet Switched CN (PS-CN). The CS-CN is based on the GSM Public Land Mobile Network (PLMN) and provides functions such as connectivity to the PSTN, circuit telephony services such as voice, and supplementary services such as call forwarding, call waiting, etc. The PS-CN is based on the GSM General Packet Radio System (GPRS) PLMN, which provides access to the Internet and other packet data services.

Both core networks connect to the UMTS Terrestrial Radio Access Network. The UTRAN has two options for its air interface operations. One option is Time Division Duplex (TDD), which makes use of a single 5 MHz carrier for communication between the UE and the UTRAN. The other option is the Frequency Division Duplex (FDD), which provides full duplex operation using 5 MHz of spectrum in each direction to and from the UTRAN.

How do all of these components fit together? Check out the image below.

The UTRAN consists of one or more **Radio Network Subsystems (RNS)**. An RNS consists of one **Radio Network Controller (RNC)** and several **Node Bs**. The radio network controller and the Node Bs are the two essential component types of UTRAN. Apart from these two component types, UTRAN requires Operation Maintenance Centers (OMC) to perform Operation Administration and Maintenance (OA&M) functionality on the Node Bs and RNCs. Yes, I know the acronyms are getting a little out of hand, but it is essential to learn them if you want to speak the language! Engineers only speak to each other in acronyms and it is very annoying indeed.

**Radio Network Controller**: The Radio Network Controller is the master of UTRAN. It handles all aspects of radio resource management within the radio network subsystem. UMTS chose this name instead of “base station controller” in order to stress the independence of UTRAN from the Core Network (CN). It interfaces with core network components such as the **Mobile Switching Center (MSC)** and **Serving GPRS Support Node (SGSN)** to route signaling and traffic from the User Equipment (UE). The RNC also interfaces with other RNCs within UTRAN to provide wide-area mobility (very important!).

**Node B**: Within this network, a Node B is the radio transmission and reception unit of UTRAN (remember above, I explained that the UE is the cell phone you carry in your pocket). It handles radio transmission and reception for multiple cells within a coverage area. So if you think about the amount of area over which a certain cell tower can transmit its signal with the proper quality of service (QoS), you can get a mental grasp on multiple cells being in one coverage area, say if the towers are close together. The Node B implements CDMA-specific functionality such as encoding, interleaving, spreading, scrambling, and modulation. Node Bs are also what used to be known as **Base Transceiver Subsystems (BTS)** in second generation systems.

Wireless is an inherently error-prone medium in which to operate our delicate signals. Therefore, many error correction techniques are employed. In UMTS-CDMA systems, thanks to the large bandwidth available, a variety of coding techniques are used. Three error-correcting methods deserve mention:

**Convolutional encoding** provides the ability to correct errors at the receiver: the receiver’s decoder removes errors from the received signal. As a result, lower transmission power can be used; the additional errors that come with lower power can be tolerated, since they are recoverable through decoding. The convolutional encoder encodes input data bits into output symbols. The data bits are entered into the first register at each clock cycle and the data bit in the last register is dumped out. Data bits are tapped at various positions and XORed to produce encoded bits. It is typically used for voice and low data rate applications. Here are the main points to keep in mind:

- Provides the ability to detect and correct errors at the receiver.
- 10^(-3) BER, typically used for voice and low data rates.
- Uses history of bits to recover from errors.
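The shift-register-and-XOR structure described above can be sketched in a few lines of code. The constraint length (3) and generator taps (7 and 5 in octal) below are common textbook choices for illustration, not parameters taken from the UMTS specification:

```python
# Minimal sketch of a rate-1/2 convolutional encoder (shift register + XOR taps).
# Constraint length k = 3 and generators 7, 5 (octal) are illustrative choices,
# not values specified by the UMTS text above.

def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Encode a list of bits; each input bit yields two output symbols."""
    state = 0  # shift register holding the most recent k input bits
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)  # shift in the new bit
        # tap and XOR the register positions selected by each generator polynomial
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

encoded = conv_encode([1, 0, 1, 1])
print(encoded)  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Notice how each output symbol depends on the history of recent input bits held in the register; that memory is exactly what the decoder exploits to recover from errors.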

Turbo codes are a newer class of error correction codes used in digital communication systems. Turbo codes have been shown to perform better for high-rate data services (which is what we crave) with stringent error rate requirements on the order of a 10^(-6) **Bit Error Rate (BER)**. The turbo encoder consists of two constituent convolutional encoders. Both constituent encoders code the same data. The first is fed the data in the same order as the input. The second encoder uses a permuted form of the input data; the permuting is accomplished by an **interleaver**, which will be discussed in detail in the following post(s). Again, main points:

- 10^(-6) BER, suitable for high data rates.
- Uses convolutional encoders in parallel to increase reliability.
- Increased delays but better error correction capabilities.

Block interleaving protects data against fading and mitigates bursty errors (imagine a deep fade wiping out a long run of consecutive bits; think that might ruin your signal? You bet.) This is accomplished by providing time diversity: bits that are adjacent in the coded stream are separated in time before transmission over the air. Interleaving is typically used together with FEC codes, since FEC codes on their own are not well suited to handling bursty errors.

- Method to shuffle bits to prevent errors during deep fade.
- Provides time diversity.
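A toy version of the idea: write the coded bits into a matrix row by row and transmit them column by column, so that a burst of consecutive channel errors is scattered across the stream after de-interleaving. The 3x4 dimensions here are arbitrary illustrative choices:

```python
# Toy block interleaver: write bits row-by-row into a rows x cols matrix,
# read them out column-by-column. A burst of consecutive channel errors is
# thereby spread out after de-interleaving. Dimensions are illustrative.

def interleave(bits, rows, cols):
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    # reading columns of the original is the same as writing rows of the transpose
    return interleave(bits, cols, rows)

data = list(range(12))            # stand-in for 12 coded bits
tx = interleave(data, 3, 4)
assert deinterleave(tx, 3, 4) == data
# a 3-bit burst hitting tx positions 0..2 lands on data positions 0, 4, 8:
print(tx[:3])  # [0, 4, 8]
```

After de-interleaving, the burst appears as isolated single-bit errors, which a convolutional or turbo decoder handles far more gracefully.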

All of these techniques will be described in detail, separately, in the following three articles.

**CDMA2000** is the successor to **IS-95** systems. CDMA2000 defines two different options for 3G technology, which differ in the amount of frequency spectrum used. The **Spreading Rate 1** (SR1) option operates in a 1.25 MHz band and is known as a 1x system. Another proposal, referred to as 1xEV-DO (1x Evolution for Data Optimized), is a data-only solution that enables a peak data rate of 2 Mbps with no mechanism for voice. This is the type of data rate that we are all familiar with, the 3G 2 Mbps data connection.

The Universal Mobile Telecommunications System (UMTS) is a successor to GSM/GPRS systems. There are also two options for the UMTS networks. The Frequency Division Duplex (FDD) option uses spectrum bands which are paired together. For example, two different 5 MHz bands are used for uplink and downlink. The Time Division Duplex (TDD) option uses an unpaired band. In other words, the same 5 MHz band is shared between uplink and downlink for TDD.

The UWC-136 (Universal Wireless Consortium for IS-136 systems) was originally considered to be the evolution for IS-136 systems. However, the IS-136 system operators eventually decided to follow the path of CDMA2000 or UMTS.

Back in the late 1990s, when most of the readers out there were still playing in the sandbox, the International Telecommunication Union (ITU) set the requirements for the next generation of wireless networks (that is why they are called Third Generation (3G)). One of the many requirements is to reach peak data rates of at least 2 Mbps. This is most relevant to the downlink, since the majority of Internet traffic flows from the server to the client.

To meet this new high speed requirement, the 2nd generation wireless networks came up with several different evolutions before eventually being replaced. The GSM evolution includes GPRS and EDGE, which provide packet data services and represent intermediate solutions until a UMTS Release 99 System is deployed. The 1xEV-DO is one possible evolution path from 1xRTT, and HSDPA is a Release 5 feature of UMTS.

UMTS is the network of choice these days. Yes, UMTS is 3G…If you haven’t caught that yet. For those nerds out there that are curious, the evolution of UMTS has progressed over the years in the following fashion:

**UMTS Release 99**

- 2 Mbps theoretical peak packet data rates
- 384 kbps (practical)

**UMTS Release 5**

- HSDPA (14 Mbps downlink theoretical)
- IMS (IP Multimedia Subsystem for multimedia)
- IP UTRAN (for scalability and lower cost)

**UMTS Release 6**

- HSUPA (up to 5.76 Mbps uplink)
- MBMS (Multimedia Broadcast Multicast Service)

**UMTS Release 7**

- Multiple Input Multiple Output (MIMO) Antenna Systems

Before reading any further, it is important to first understand this: *in mathematics, there is a rule that states that any periodic function of time may be “reconstructed” exactly from the summation of an infinite series of harmonic sine waves*. The generalized theory itself is referred to as a “**Fourier Series**.” For use with arbitrary electronic time-domain signals of period $T_0$, it may be expressed as:

$f(t) = a_0 + \sum_{n=1}^{\infty} \left[ a_n \cos(n \omega_0 t) + b_n \sin(n \omega_0 t) \right]$

over the range:

$t_1 \le t \le t_1 + T_0$

where:

$a_0$ is the magnitude of the 0th harmonic

$a_n$ represents the magnitude of the nth harmonic of cosine wave components

$b_n$ represents the magnitude of the nth harmonic of sine wave components

$\omega_0 = 2\pi/T_0$ is the **fundamental frequency**

$t$ is the variable that represents instances in time

$n$ is the variable that represents the specific harmonic, and is always an integer

This monumental discovery was first announced on December 21, 1807 by the historic gentleman Baron Jean-Baptiste-Joseph Fourier.

In order to go from the Fourier Series to the Fourier Transform, it is necessary to express the previous Fourier Series as a series of ever-lasting exponential functions. Using an orthogonal basis set of signals described by $e^{jn\omega_0 t}$, each of magnitude $D_n$, we now write the Fourier Series as:

$f(t) = \sum_{n=-\infty}^{\infty} D_n\, e^{jn\omega_0 t}$

where $\omega_0$ is $\frac{2\pi}{T_0}$.

The Fourier Integral, also referred to as the Fourier Transform for electronic signals, is a mathematical method of turning any arbitrary function of time into a corresponding *function of frequency*. A signal, when transformed into a “function of frequency,” essentially becomes a function that expresses the *relative magnitudes of each harmonic of a Fourier Series that would be summed to recreate the original time-domain signal*. To see this, observe the following figures:

In order to rebuild a square wave with sines and cosines only, it is necessary to determine the magnitudes of each harmonic used in the Fourier Series, or rather, the Fourier Integral (for **continuous **time-domain signals). The relative magnitudes of these needed harmonics can be displayed graphically as a function of frequency (widely known as a signal’s **frequency spectrum**):

Though the recreation of a signal using an infinite series of sines and cosines is impossible to achieve in the lab, one may get very close. Close enough that the most advanced lab equipment wouldn’t be able to calculate the error due to tolerance specifications. This allows engineers to use Fourier Analysis to work with time-domain signals, such as radio signals, television signals, satellite signals and just about any signal you can think of. By viewing a signal according to what frequency components are contained within it, electrical engineers may concern themselves with magnitude changes in frequency only, and may no longer worry about the signal’s magnitude-changes through time. Not only is this a very practical concept when working in the lab, it also greatly simplifies the mathematics behind signal conditioning in general. In fact, the entirety of the Communications industry owes its success to the Fourier Transform for not only antenna design, but a plethora of other applications.
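To see the reconstruction idea in action, the short sketch below sums the odd sine harmonics of a ±1 square wave, whose textbook Fourier coefficients are $4/(n\pi)$ for odd $n$ and zero otherwise, and shows the partial sum closing in on the true value as more harmonics are kept. The period and evaluation point are arbitrary choices for illustration:

```python
import math

# Partial Fourier-series reconstruction of a +/-1 square wave of period 2*pi.
# The odd-harmonic magnitudes 4/(n*pi) are the standard textbook coefficients;
# the number of terms kept (N) is an arbitrary choice here.

def square_partial_sum(t, N):
    """Sum the first N odd sine harmonics of a +/-1 square wave."""
    return sum(4 / (n * math.pi) * math.sin(n * t) for n in range(1, 2 * N, 2))

# More harmonics -> closer to the true value of +1 on (0, pi):
for N in (1, 5, 50):
    print(N, square_partial_sum(math.pi / 2, N))
```

With one harmonic the value overshoots to about 1.27; with fifty harmonics the sum sits within about half a percent of the true value of 1.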

The derivations that follow have been summarized from Chapter 4 of the textbook “Signal Processing and Linear Systems” by B.P. Lathi, a fine book for students of Communication Systems.

We begin by considering some arbitrary, aperiodic time-domain signal. An example of this kind of wave would be the output of a microphone after a man speaks a few words into it. For the actual signal generated by the changes in voltage as the man spoke, we can use Fourier Analysis to describe it as a summation of exponential functions *if* we instead desire to reconstruct a *periodic* signal composed of the same voice signal *repeating* every $T_0$ seconds. For an accurate description, it is important that $T_0$ is long enough that the repeating arbitrary signals do not overlap. However, if we let $T_0$ approach $\infty$, then this “periodic” signal is simply just the voice signal (or any general arbitrary function) in time we wanted to describe initially. Mathematically, we express:

$f(t) = \lim_{T_0 \to \infty} f_{T_0}(t)$

where $f(t)$ is the time-domain function we wish to apply the Fourier Transform to (here, the arbitrary “voice” signal) and $f_{T_0}(t)$ is its periodically repeated version. For the above equation to be true, $D_n$ is equal to:

$D_n = \frac{1}{T_0} \int_{-T_0/2}^{T_0/2} f_{T_0}(t)\, e^{-jn\omega_0 t}\, dt$

**where:**

$\omega_0 = \frac{2\pi}{T_0}$

It is important to note here that in practice, the *shape* (aka “envelope”) of a signal’s frequency spectrum is what is of main interest, and the magnitude of the components within the spectrum comes secondary. This is because amplifiers and other signal-conditioning circuits may be built to alter the magnitude in any way one wishes, and will not affect signal frequencies (so long as the circuits are LTI systems). Analyzing the envelope of a signal’s Fourier Transform allows one to use intuitive and mathematically-simplified approaches to signal-processing in general, which we shall see later. For this reason (and also as $T_0$ approaches $\infty$) let:

$F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt$

Notice that $F(n\omega_0)$ is simply $D_n$ without the constant multiplier $\frac{1}{T_0}$, such that:

$D_n = \frac{1}{T_0} F(n\omega_0)$

which implies that $f_{T_0}(t)$ may be written:

$f_{T_0}(t) = \sum_{n=-\infty}^{\infty} \frac{F(n\omega_0)}{T_0}\, e^{jn\omega_0 t}$

Observation of this fact reveals insight: the shorter the period $T_0$, the larger the magnitude of the coefficients. But, on the other hand, as $T_0 \to \infty$, the magnitude of every frequency component approaches $0$, which is why engineers choose to analyze spectrum envelopes. So, instead of visualizing absolute frequency magnitudes, instead consider that the frequency spectrum simply expresses the *magnitude-density per unit of bandwidth, aka Hz*. And since:

$\omega_0 = \frac{2\pi}{T_0}$

then:

$\frac{1}{T_0} = \frac{\omega_0}{2\pi}$

and:

$f_{T_0}(t) = \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} F(n\omega_0)\, e^{jn\omega_0 t}\, \omega_0$

so:

as $T_0 \to \infty$, $\omega_0$ becomes the infinitesimal $d\omega$, $n\omega_0$ becomes the continuous variable $\omega$, and the sum becomes an integral. In the limit we see:

$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{j\omega t}\, d\omega$

which is referred to as the **Fourier Integral**. $F(\omega)$ is referred to as the **Fourier Transform** of the original aperiodic function $f(t)$, and we express this concept as:

$f(t) \Leftrightarrow F(\omega)$

This example is from the same textbook as the previous derivation, and can be found on page 239.

Find the Fourier Transform of:

$f(t) = e^{-at}u(t)$

where $a$ is an arbitrary constant.

To do this, we apply the Fourier Integral to the function as follows:

$F(\omega) = \int_{-\infty}^{\infty} e^{-at}u(t)\, e^{-j\omega t}\, dt$

Because of the $u(t)$ factor, we only integrate from $0$ to $\infty$. We simplify for:

$F(\omega) = \int_{0}^{\infty} e^{-(a + j\omega)t}\, dt = \left. \frac{-1}{a + j\omega}\, e^{-(a + j\omega)t} \right|_{0}^{\infty}$

Also, we know that $|e^{-j\omega t}| = 1$. So, for $a > 0$, as $t \to \infty$, $e^{-(a + j\omega)t} = e^{-at} e^{-j\omega t} \to 0$.

So:

$F(\omega) = \frac{1}{a + j\omega}$

for:

$a > 0$
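As a sanity check on this result, one can approximate the Fourier integral numerically and compare against $1/(a + j\omega)$; the particular values of $a$ and $\omega$ below are arbitrary test points:

```python
import cmath

# Numerical sanity check: for f(t) = e^(-a*t) u(t), the Fourier transform
# should be F(w) = 1/(a + jw). We approximate the Fourier integral with a
# Riemann sum; a = 2 and w = 3 are arbitrary test values.

def ft_numeric(a, w, t_max=25.0, dt=1e-4):
    """Left Riemann-sum approximation of the Fourier integral, t >= 0."""
    n = int(t_max / dt)
    return sum(cmath.exp(-(a + 1j * w) * k * dt) * dt for k in range(n))

a, w = 2.0, 3.0
numeric = ft_numeric(a, w)
analytic = 1 / (a + 1j * w)
print(abs(numeric - analytic))  # should be tiny
```

The discrepancy is on the order of the step size, confirming the closed-form answer.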

The relationship between $f(t)$ and $F(\omega)$ exhibits a beautiful symmetry that helps one develop an intuitive approach to signal analysis. Among all the concepts within electrical engineering, the properties relating a time-domain function and its Fourier transform are among the most important to understand. Observe the following properties, **which apply for all** $f(t) \Leftrightarrow F(\omega)$ pairs:

1.) **Fourier Transform:** Gives an equation to solve for the frequency-domain function $F(\omega)$ from $f(t)$:

$F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-j\omega t}\, dt$

2.) **Inverse Fourier Transform:** Gives an equation to solve for the time-domain function $f(t)$ from $F(\omega)$:

$f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{j\omega t}\, d\omega$

3.) **Symmetry Property:** For a given pair of a time-domain signal and its Fourier transform, we note that the time-domain envelope is different in shape when compared to the frequency-domain envelope. However, switching the shape of the two functions with respect to domain (time or frequency), will result in the same envelopes except with different scaling coefficients. For example, a square pulse through time has a frequency spectrum described by a sinc function, and a sinc function through time results in a frequency spectrum described by a square pulse.

4.) **Scaling Property:** Time-scaling a time-domain signal (by a constant $a$) results in a magnitude-and-frequency scaling of the signal’s corresponding frequency spectrum. This also signifies that the longer a signal exists through time, the narrower the bandwidth (the collection of frequency components needed to rebuild the signal) of its frequency spectrum.

5.) **Time-Shifting Property:** Time-shifting (delaying or advancing) a time-domain signal results in a *phase shift* in each of the ever-lasting frequency components needed to rebuild it. The frequency spectrum is otherwise unchanged; only the phase of each component is shifted.

6.) **Frequency-Shifting Property:** Multiplying a time-domain signal by a sinusoidal signal of some frequency $\omega_0$, a method which begets amplitude and frequency modulation (AM/FM), leaves the frequency spectrum unchanged **except for a shift of each individual frequency component by** $\omega_0$.
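This modulation property can be verified numerically. The sketch below reuses the transform pair derived earlier, $f(t) = e^{-t}u(t)$ with $F(\omega) = 1/(1 + j\omega)$, and checks that multiplying by a cosine splits the spectrum into two shifted copies; the test frequencies are arbitrary:

```python
import math, cmath

# Frequency-shifting (modulation) property check: the transform of
# f(t)*cos(w0*t) should equal (F(w - w0) + F(w + w0)) / 2, where
# F(w) = 1/(1 + jw) is the transform of e^(-t)u(t). w0 = 5 and w = 4
# are arbitrary test points.

def fourier(g, w, t_max=30.0, dt=1e-4):
    """Left Riemann-sum approximation of the Fourier integral of g(t), t >= 0."""
    n = int(t_max / dt)
    return sum(g(k * dt) * cmath.exp(-1j * w * k * dt) * dt for k in range(n))

F = lambda w: 1 / (1 + 1j * w)          # analytic transform of e^(-t)u(t)
w0, w = 5.0, 4.0

modulated = fourier(lambda t: math.exp(-t) * math.cos(w0 * t), w)
expected = (F(w - w0) + F(w + w0)) / 2
print(abs(modulated - expected))  # should be tiny
```

The agreement shows the cosine simply relocated the spectrum by ±w0, exactly as the property states.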

Lastly, these tables (table 1, table 2) can greatly simplify Fourier analysis when used in signal processing.


1G systems introduced the cellular concept, in which multiple antenna sites are used to serve an area. The coverage of a single antenna site is called a cell. A cell can serve a certain number of users, and higher-system capacity can be achieved by creating more cells with smaller coverage areas. One distinguishing factor of 1G systems is that they make use of analog radio transmissions, so user information, such as voice, is never digitized. As such, they are best suited for voice communications, since data communications can be cumbersome.

The migration of 1G analog technologies toward 2G technologies began in the late 1980s and early 1990s. The primary motivation was increased system capacity. This was achieved by using more efficient digital radio techniques that enabled the transmission of digitized compressed speech signals. These digital radio techniques also supported data services with data rates as high as 14,400 bits per second (14.4 kbps) in some systems. 2G data communication is typically done using circuit-switched techniques, which are not very efficient for sending packet data such as that sent on the Internet. This inefficiency makes the use of wireless data more expensive for the end user.

The next step in the evolution is from 2G to 3G, which started in the year 2000. The new key feature of 3G systems is the support of high-speed data services with data rates as high as 2 million bits per second (2 Mbps). Data can be transferred using packet-switching techniques rather than the circuit-switching approach. Therefore, it is more efficient and less expensive. This opens up the possibility of cost-effective Internet access, access to corporate intranets, and a host of multimedia services.

If you want to read more about the evolution of wireless networks and WCDMA radio networks in general, please stay tuned for the next several editions where I will go into details.

**Upcoming topics include, but are not limited to:**

- Physical layer functions
- W-CDMA Channels
- Basic call setups
- Data session setups
- Service reconfigurations
- UTRAN mobility management
- Inter-system procedures
- RF design & analysis of UMTS radio networks
- The evolution of UMTS
- Architectures


The odds favor that by the time someone has reached this article, myself included, they have spent at least the briefest of moments (frustratedly?) questioning the practical applications for linear combination, linear independence and linear math. In a sentence, these concepts allow us to mathematically understand and represent multidimensional coordinate systems. If you’re looking for a quick explanation for a homework problem feel free to skim through the bolded topics for help in specific areas of concern. Otherwise, here’s something to think about. Imagine maneuvering in three dimensional space. An instantaneous position can be described using a three dimensional coordinate system. When following a consistent pattern of movement, an instantaneous position can be described with a fourth dimension, time. Suppose you have just landed the snowball throw of a lifetime and hit a target moving across your view plane, increasing the distance between you, and uphill. You have properly estimated the intersection of two moving objects in four dimensions. This is not always an easy task to execute. Now make this throw using a fifth dimension. Most people can’t comprehend the existence of a fifth dimension without having to understand how to maneuver in it. With linear math we can attempt to understand and represent the relationships between these dimensions.

**Linear Independence**

A set of vectors $\{v_1, v_2, \dots, v_n\}$ is linearly independent if the equation

$x_1 v_1 + x_2 v_2 + \dots + x_n v_n = 0$

has ONLY the zero (trivial) solution $x_1 = x_2 = \dots = x_n = 0$.

**Linear Dependence**

Alternatively, if the equation above has any solution in which some $x_i \neq 0$, the set of vectors is said to be linearly dependent.

By row reducing a coefficient matrix created from our vectors $\{v_1, \dots, v_n\}$, we can determine the solutions $\langle x_1, \dots, x_n \rangle$. Then, to classify a set of vectors as linearly independent or dependent, we compare to the definitions above.

**Example**

Determine if the following set of vectors are linearly independent:

, , ,

We need to understand that our vectors can be represented with a system of equations all equaling zero to satisfy the equation from our definition of linear independence. These equations will look something like this:

Notice that I have simply taken the coefficients from the given vectors and multiplied them by four variables (the number of variables will equal the number of vectors in the given set). They have been set equal to zero to allow us to test for linear independence. From here, create a coefficient matrix and perform row operations to reduce the matrix to reduced row echelon form (rref).

rref =

Finding the solution of the rref matrix may be the more difficult step in this process. However, it may become trivial following **a few simple steps**.

**1) Identify the free variables in the matrix.** Free variables correspond to columns that contain no pivot; pivot variables are the first non-zero entry in each row, and since we have taken the rref of our matrix, all of the pivot variable coefficients are 1. By locating all free variables (or by eliminating all pivot variables) we find that we have one free variable.

**2) Write free variables into your solution. **The variable can be written into our solution vector as itself but we will represent it with another variable name (i.e. ) so that our solution is in parametric form. Multiple free variables are represented with multiple variable names (i.e. ). After this step your solution vector should look like this: <> <>.

**3) Solve for pivot variables. **The pivot variables should either be constant (i.e. 0, 6) or a function of your free variables (i.e. ). From the rref matrix we can see that , , and .

**4) Complete the solution vector. **Placing the values we just calculated into our solution vector: <> <>

**Finally,**

Since not all of our solution components are zero, the given set of vectors is said to be linearly dependent. The linear dependence relation is written using our solution vector entries multiplied by the respective vectors from the given set. We can also conclude that any vectors with non-zero coefficients are linear combinations of each other.
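The original vectors of this example were lost in formatting, so the sketch below tests a hypothetical set of four vectors in R^3 (any four vectors in R^3 must be linearly dependent). The set is independent exactly when the rank of the matrix built from the vectors equals the number of vectors:

```python
# Linear independence test via rank. The four vectors here are hypothetical
# stand-ins, not the ones from the worked example above.

def rank(rows):
    """Rank via Gaussian elimination on a small matrix given as a list of rows."""
    m = [list(map(float, r)) for r in rows]
    r = 0
    for c in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-9), None)
        if pivot is None:
            continue                      # no pivot in this column: free variable
        m[r], m[pivot] = m[pivot], m[r]   # move the pivot row up
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > 1e-9:
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

vectors = [[1, 0, 2], [0, 1, 1], [1, 1, 3], [2, 0, 4]]  # hypothetical set
verdict = "independent" if rank(vectors) == len(vectors) else "dependent"
print(rank(vectors), verdict)  # 2 dependent
```

Here the rank is 2 while there are 4 vectors, so the set is dependent; indeed the third vector is the sum of the first two and the fourth is twice the first.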


Any op-amp worth its salt has a differential amplifier at its front end, and you’re nobody if you can’t design one yourself. So, this article presents a general method for biasing and analyzing the performance characteristics of single-stage BJT and MOSFET differential amplifier circuits. The following images show the *general *schematic for both kinds of differential amplifiers, often referred to as a **differential input stage** when used in designing op-amps. Notice that these types of differential amplifiers use **active loads **to achieve *wide swing* and *high gain*.

Due to design processes and the nature of the devices involved, BJT circuits are “simpler” to analyze than their FET counterparts, whose circuits require a few extra steps when calculating performance parameters. For this reason, this tutorial will begin by biasing and analyzing a BJT differential amplifier circuit, and then will move on to do the same for a FET differential amplifier. But it should be noted that **the procedures to analyze these types of differential amplifiers are virtually the same.**

The first thing needed is to configure the DC biasing. To accomplish this, a practical implementation of the tail current source must be developed. A very popular method is to use a **current mirror**. A simple current mirror is shown below:

It is easy to understand how a current mirror works. Observe the equation governing the amount of collector current in a BJT, denoted $I_C$:

$I_C = I_S\, e^{V_{BE}/(n V_T)} \left(1 + \frac{V_{CB}}{V_A}\right)$

**where:**

- $I_C$ is the collector current
- $I_S$ is the scale current
- $V_{BE}$ is the DC voltage across the base-emitter junction
- $V_T$ is the thermal voltage, typically 25 mV
- $n$ is the quality factor, typically between 1 and 2, and is frequently assumed to be 1
- $V_{CB}$ is the voltage across the collector-base junction
- $V_A$ is the Early voltage

**Note: [**This equation may look intimidating at first, but what is important to understand is that the point of designing “by hand” is to *get close.* One should aim simply to get a good *estimation *of such parameters as necessary bias current, gain, input impedance, etc. In this way, computer simulations can analyze the hand-designed circuit in much closer detail, which greatly aids in the process of designing a real-life differential amplifier. Knowing this, the equations to be used in this tutorial will be rough estimates, but are still invaluable when it comes to designing these types of circuits.**]**

By assuming a very large Early voltage (and thus a very large equivalent output resistance), one can estimate that the collector current through any BJT can be described by:

$I_C = I_S\, e^{V_{BE}/V_T}$

What can be noticed here is that the only controllable variable in that equation is $V_{BE}$. All the other terms are constants that depend on either the environment or the actual physical size of the device. This means that for any two same-sized transistors, the currents through their collectors *will be the same as long as the voltage across their base-emitter junctions is the same*. By tying their bases and emitters together, we can mirror the currents between them! In order to implement a successful current mirror, one transistor must have a current induced in it, which is then mirrored to serve as the differential amplifier’s tail current source. After adding this current mirror to our BJT differential amplifier, the resulting schematic is:
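A quick numerical sketch of the mirroring idea, using typical (not text-specified) values for the scale current and thermal voltage: solve the simplified collector-current equation for the $V_{BE}$ that yields 1 mA, then confirm that a matched transistor at the same $V_{BE}$ carries the same current.

```python
import math

# Sketch of why a current mirror works: identical transistors at identical
# V_BE carry identical collector current. I_S and V_T below are typical
# illustrative values, not figures from the text.

I_S = 1e-15   # scale (saturation) current, amps
V_T = 0.025   # thermal voltage, volts

def i_c(v_be):
    """Simplified collector current, ignoring the Early effect."""
    return I_S * math.exp(v_be / V_T)

# the V_BE that the diode-connected transistor settles at for 1 mA:
v_be = V_T * math.log(1e-3 / I_S)
# the mirrored transistor, sharing that V_BE, carries the same current:
print(v_be, i_c(v_be))  # ~0.69 V, ~1e-3 A
```

Note that the required $V_{BE}$ comes out near 0.7 V, which is exactly why that figure is the standard hand-analysis assumption for a conducting BJT.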

In order to properly bias this circuit, it is necessary to include a bias resistor. Two things are accomplished by including this resistor in our circuit. One is that we can induce the reference current in the diode-connected transistor, and thus the mirrored tail current. The other important thing this resistor does is drop a majority of the available voltage across itself, so that the diode-connected transistor doesn’t have the entire voltage difference between the supplies across it! To bias this circuit, the first thing one must do is determine the desired magnitude of the current source. This parameter depends on how you want the circuit to operate, and is usually a known value. In this tutorial, we will assume we want a bias current of 1 mA. In order to determine the necessary size of the bias resistor, we analyze the loop that consists of:

Kirchhoff’s Voltage Law (KVL) around this loop reveals:

These kinds of circuits are typically supplied with equal-magnitude positive and negative rails. So, **this tutorial will assume:**

.

For a given technology, all of the BJT transistors **are designed to have the same turn-on voltage.** This tutorial will assume 0.7 V for each BJT. That being the case, and rearranging the above equation, results in:

By introducing a resistor of the value calculated above, the bias current is now established at 1 mA. Due to symmetry, the currents through the two input transistors are each half of the bias current, described by:

$I_C = \frac{1\text{ mA}}{2} = 0.5\text{ mA}$

Now that we know the collector currents through the input transistors, characterizing the performance of this differential amplifier is a breeze. Since the parameters we are interested in (gain, CMRR, etc.) are *small-signal* parameters, the *small-signal* model of this circuit is needed. To obtain this, a nice trick is to “cut the amplifier in half” (lengthwise, such that you only analyze the output side of the amplifier) to obtain:

**Note: [**even though the output signal is single-ended here, the output is still a result of the entire input signal, and not just half of it. This is because the small-signal changes in the currents flowing through the input pair are impeded from traveling down the branches controlled by the current sources. Also note that the voltage controlling the voltage-controlled current source (VCCS) is the voltage across the base-emitter resistance. The resistance in the emitter of these transistors has been omitted, due to its typically small value (10 to 25 $\Omega$). In addition to this, the tail current source is assumed to be a small-signal (AC) open circuit. The frequency response has also been omitted, and the amplifier is assumed to be unilateral.**]**

It is simple to see that the small-signal output voltage is equal to the current through the parallel combination of the two output resistances multiplied by the size of that same parallel combination. The value of the current through this combination is equal to the input voltage multiplied by $g_m$ (the *transconductance* parameter):

The transconductance parameter is the ratio of *output current* to *input voltage*. It is described mathematically as:

$g_m = \frac{\partial i_C}{\partial v_{BE}}$

and can be solved for thusly:

$g_m = \frac{I_C}{V_T}$

In this example, $I_C$ is 0.5 mA and $V_T$ is 25 mV. With these values, we compute:

$g_m = \frac{0.5\text{ mA}}{25\text{ mV}} = 20\text{ mA/V}$

Now that the transconductance parameter is known, the only other values needed to compute the differential mode gain are the two output resistances. One output-side transistor is an npn and the other is a pnp, so they will not have the same small-signal resistance, but the procedure to find the two values is nearly identical. The following equation describes the small-signal output resistance of any BJT:

$r_o = \frac{V_A}{I_C}$

The Early voltage $V_A$ is typically given, and in this tutorial:

Which would result in:

and

Now that the small-signal resistances are known, along with the transconductance parameter, the differential mode gain ($A_{dm}$) may be calculated:

$A_{dm} = g_m \left( r_{o,npn} \,\|\, r_{o,pnp} \right)$

or, in decibels (dB):

The differential input impedance of a differential amplifier **is the impedance “seen” by any differential signal.** A “differential signal” is any and all signal content that *isn’t shared by both inputs*. For instance, if:

$v_1 = v_{cm} + \frac{v_d}{2}$

and

$v_2 = v_{cm} - \frac{v_d}{2}$

then the common mode signal and differential mode signals are:

$v_{cm} = \frac{v_1 + v_2}{2}$

and

$v_d = v_1 - v_2$

To find the differential input impedance, begin by following the loop consisting of:

, as illustrated below:

We see that, in the differential signal mode, the path to ground only consists of $r_\pi$ *of each input transistor*. Since this is the case, the differential mode input impedance of any BJT diff-amp may be expressed as (**omitting emitter resistance and assuming the input pair is matched**):

$R_{id} = 2 r_\pi$

where:

$r_\pi = \frac{\beta}{g_m}$

($\beta$ is the current gain factor)

A typical value for $\beta$ is 100, and knowing $g_m$ allows one to compute:

$r_\pi = \frac{100}{20\text{ mA/V}} = 5\text{ k}\Omega$

So, for the BJT differential amplifier in this tutorial, the **differential mode input impedance** is:

$R_{id} = 2 \left( 5\text{ k}\Omega \right) = 10\text{ k}\Omega$
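Putting this tutorial’s numbers together ($I_C$ = 0.5 mA per side, $V_T$ = 25 mV, $\beta$ = 100), the input-impedance arithmetic is just a few lines:

```python
# Back-of-envelope numbers from this tutorial's bias point:
# g_m = I_C / V_T, r_pi = beta / g_m, R_id = 2 * r_pi.

I_C = 0.5e-3   # collector current per input transistor, amps
V_T = 25e-3    # thermal voltage, volts
beta = 100     # current gain factor

g_m = I_C / V_T        # transconductance: 0.02 A/V (20 mA/V)
r_pi = beta / g_m      # base-emitter small-signal resistance: 5 kOhm
R_id = 2 * r_pi        # differential mode input impedance: 10 kOhm
print(g_m, r_pi, R_id)  # 0.02 5000.0 10000.0
```

Scripting the arithmetic like this makes it painless to re-run the hand analysis when the bias current or device beta changes.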

The common mode (CM) gain is the “gain” that common mode signals “see,” or rather, the *attenuation applied to signals present on both differential inputs*. A good op amp attempts to eliminate all common mode signals, but this is obviously not possible in the real world. However, one may compute the common mode gain by “cutting the amplifier in half” and observing one of the loops in the following diagram. The path differs from that of differential signals because common mode signals make it so that the two signal sources don’t “see” each other. Notice:

We choose a loop and draw the small-signal model to obtain:

Similar to the output voltage of the differential mode small-signal model, we can see that the output voltage is the voltage across the collector resistance. We also know the current running through this resistance, and may equate the output voltage to:

This time, though, the input signal isn’t distributed entirely over the resistances at the base. Instead, a fraction of the common mode input signal is across the base-emitter junction. Referring back to the small-signal model, we see that the loop composed of:

reveals that:

but the base current is negligible compared to the current supplied by the collector, so we say:

which we use to solve for the collector current:

Which we then plug back into the equation for the output voltage:

From this we can solve directly for the common mode gain:

Here, the **common mode gain** is:

The common-mode input impedance *is the impedance that common-mode input signals “see.”* One can analyze the common mode input impedance ($R_{icm}$) by, again, “cutting the differential amplifier in half” and analyzing one side of the resulting schematic, assuming a common mode signal. This can be found by observing figure 6, above.

Choosing one of these paths, we construct the corresponding small-signal model for common mode signals, which is shown in figure 7. From this figure, deriving $R_{icm}$ is simple. Notice the currents flowing in the loop that consists of:

from this loop, one may compute:

which is used to find an equation for the input impedance:

and since:

and

So:

which is the same as:

which can be rearranged for:

where:

Which, in this tutorial, results in:

The common mode rejection ratio (CMRR) is simply a ratio of the differential mode gain to the common mode gain, and is defined as:

$\mathrm{CMRR} = \left|\dfrac{A_{dm}}{A_{cm}}\right|$
Here, the CMRR is:

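The CMRR computation is easy to sketch numerically. The gains below are assumed example values (not this tutorial’s results), and the decibel form uses the usual $20\log_{10}$ conversion.

```python
# Numeric illustration of the CMRR definition, using assumed example gains
# (hypothetical values, not those derived in this tutorial).
import math

A_dm = 200.0     # assumed differential mode gain
A_cm = 0.05      # assumed common mode gain (an attenuation)

CMRR = abs(A_dm / A_cm)            # plain ratio, as defined above
CMRR_dB = 20 * math.log10(CMRR)    # CMRR is often quoted in decibels

print(f"CMRR = {CMRR:.0f}  ({CMRR_dB:.1f} dB)")
```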
As stated before, the analysis of these performance parameters is done virtually the same way for FET diff amps as for BJT diff amps. There are, however, a few key differences. For one, BJTs are typically built to be the same size on a given IC device. But for an IC device that uses FETs, this is not the case. Each FET has an adjustable length and width that affects how much current it will pass for a given voltage drop across the device. In fact, observe the equation for the *drain current* in a FET (in saturation):

$i_D = \dfrac{1}{2} k' \dfrac{W}{L} (v_{GS} - V_t)^2$
From this, the gate-source voltage is:

$v_{GS} = V_t + \sqrt{\dfrac{2 i_D}{k' (W/L)}}$
where:

- $k'$ is the process conductivity parameter, and is equal to:

$k' = \mu_n C_{ox}$, which is the electron mobility multiplied by the oxide capacitance

- $W$ and $L$ are the width and length of the device, respectively
- $v_{GS}$ is the gate-to-source voltage
- $V_t$ is the threshold voltage of the FET

Analyzing BJTs in a circuit is simpler because all base-emitter voltages are assumed to be equal. But this is not the case for MOSFETs, and one must analyze the above equation (or others) to find device voltages. There is, however, the threshold voltage – the minimum gate-to-source voltage that will allow for any conduction whatsoever. The threshold voltage is a result of the FET fabrication process, and is typically provided on datasheets for each FET type (n-channel or p-channel).

For a differential amplifier composed of FETs to work, it is imperative that all the FETs be in **saturation mode**. For a FET to be in saturation implies:

$v_{DS} \geq v_{GS} - V_t$
So this must be checked when analyzing these types of circuits.
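The square-law inversion and saturation check above can be sketched in a few lines of Python. Every device value here ($k'$, $W/L$, $V_t$, bias current, and $v_{DS}$) is an assumed example, not taken from a real datasheet.

```python
# Sketch: find v_GS from the square-law drain current equation, then verify
# the saturation condition. All device values are assumed for illustration.
import math

k_prime = 200e-6   # assumed process conductivity parameter k', A/V^2
W_over_L = 10      # assumed device W/L ratio
V_t = 0.7          # assumed threshold voltage, V
i_D = 0.5e-3       # assumed drain bias current, 0.5 mA

# Invert i_D = (1/2) k' (W/L) (v_GS - V_t)^2 for v_GS:
v_GS = V_t + math.sqrt(2 * i_D / (k_prime * W_over_L))

# Saturation requires v_DS >= v_GS - V_t (the "overdrive" voltage):
v_DS = 1.0         # assumed drain-source bias, V
assert v_DS >= v_GS - V_t, "device is not in saturation"
print(f"v_GS = {v_GS:.3f} V, overdrive = {v_GS - V_t:.3f} V")
```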

Another important difference is the derivation of the transconductance parameter, $g_m$. When analyzed for a BJT, it was defined as the ratio of the change in collector current to the change in the base-emitter voltage. For a FET there is a similar procedure, as the transconductance is defined as the ratio of the change in drain current to the change in gate-source voltage. Mathematically, the transconductance parameter is:

$g_m = \dfrac{\partial i_D}{\partial v_{GS}} = k' \dfrac{W}{L} (v_{GS} - V_t)$
The last notable difference is the computation of a FET’s small-signal resistance. The equation describing $r_o$ is:

$r_o = \dfrac{1}{\lambda i_D}$

where $\lambda$ is the channel-length modulation parameter.
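To tie the FET expressions together, here is a short Python sketch that evaluates the square-law drain current, the transconductance $g_m = k'(W/L)(v_{GS}-V_t)$, and $r_o = 1/(\lambda i_D)$ for assumed (hypothetical) device values.

```python
# Quick numeric evaluation of the FET small-signal expressions above.
# All device values are assumed examples, not real datasheet numbers.
k_prime = 200e-6    # assumed process conductivity parameter k', A/V^2
W_over_L = 10       # assumed device W/L ratio
V_ov = 0.5          # assumed overdrive voltage (v_GS - V_t), V
lam = 0.02          # assumed channel-length modulation parameter, 1/V

i_D = 0.5 * k_prime * W_over_L * V_ov ** 2   # square-law drain current
g_m = k_prime * W_over_L * V_ov              # d(i_D)/d(v_GS) at the bias point
r_o = 1 / (lam * i_D)                        # small-signal output resistance

print(f"i_D = {i_D * 1e3:.3f} mA, g_m = {g_m * 1e3:.1f} mA/V, r_o = {r_o / 1e3:.1f} kohm")
```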

From this little discussion, you should be able to apply the principles used to analyze the BJT differential amplifier to the analysis of a FET-based differential amplifier. But, of course, if you would like to see a FET differential amplifier explained in more detail, do not hesitate to ask a question!

This post was created in March 2011 by Kansas State University Electrical Engineering student Safa Khamis. A million thank yous extended to Safa for taking the time to document this important process for everyone else to learn from. Please leave questions, comments, or ask a question in the questions section of the website.

Amongst the concepts that cause the most confusion to electrical engineering students, the Convolution Integral stands as a repeat offender. As such, the point of this article is to explain what a convolution integral is, why engineers need it, and the math behind it.

In essence, the “convolution” of two functions (over the same variable, e.g. $f(t)$ and $g(t)$) is an operation that produces a third function describing how the first function “modifies” the second one. Conversely, the resulting function can be seen as how the second function “modifies” the first. Sometimes the result is used to describe how much the first two functions “have in common.” In all honesty, the concept of the convolution of two functions is quite abstract, but the frequency at which it appears in nature grants its importance to scientists and engineers. Ultimately the aim here is to identify its use to electrical engineers – so for now do not dwell solely on its mathematical significance.

A convolution of two functions is denoted with the operator “$*$”, and is written as:

$f(t) * g(t) = \displaystyle\int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau$

where $\tau$ is used as a “dummy variable.” To aid in understanding this equation, observe the following graphic:

Before diving any further into the math, let us first discuss the relevance of this equation to the realm of electrical engineering.

Most electrical circuits are designed to be *linear, time-invariant* (LTI) systems. Being “linear” implies that the magnitude of a circuit’s output signal is a **scaled** version of the input signal’s magnitude. Further, an LTI system that is excited by two independent signal sources will output the **sum** of the **scaled** versions of each signal. This is extended for an infinite number of independent signal sources, and gives rise to the concept of *superposition*. Put another way, if a function $x(t)$ causes an LTI system to output $y(t)$, then:

$a\, x(t) \rightarrow a\, y(t)$

where $a$ is a multiplicative constant. In addition to this, superposition allows us to say:

$a_1 x_1(t) + a_2 x_2(t) \rightarrow a_1 y_1(t) + a_2 y_2(t)$
Being a “time-invariant” system means *it does not matter when the input signal is applied* – a *specific* input signal will always result in *the same* output signal for a given LTI system. Put mathematically, time-invariance can be expressed as:

if $x(t) \rightarrow y(t)$, then $x(t - T) \rightarrow y(t - T)$

where $T$ can be viewed as a time delay when dealing with signals through time (i.e. “time-domain signals”). Though not directly, this concept also signifies that *an output signal cannot contain frequency components not inherent in the input signal.*

The vast majority of circuits are LTI systems, each with a specific *impulse response.* The “impulse response” of a system is the system’s output when its input is fed with an *impulse signal* – a signal of infinitesimally short duration. A real-world “impulse signal” would be something like a lightning bolt – or any form of ESD (electro-static discharge). Basically, any voltage or current that spikes in magnitude for a *relatively* short period of time may be viewed as an impulse signal. The impulse response of a circuit will always be a time-domain signal, and exists because no signal can propagate through a circuit in zero time; each individual electron involved can only move so quickly through each component. Typically, real-world electronic LTI systems exhibit an impulse response that consists of an initial spike in magnitude, followed by an everlasting and ever-decreasing exponential relationship in signal magnitude. The following image describes this graphically.

So, here’s the big deal: the fact that each LTI circuit has a specific impulse response function (here, referred to as $h(t)$) is very useful in predicting its behavior given a particular input signal (here, referred to as $x(t)$). This is because the input signal itself may be viewed as an *impulse train* – a stream of continuous impulse functions, with infinitesimally short durations of time between each impulse. This fact, along with superposition, allows one to find the output of an LTI system given an arbitrary input signal *by summing the LTI system’s impulse response to each impulse function that makes up the input signal.* By allowing the time between each “impulse” of the input signal to go to zero, this approach can be used to determine the output time-domain signal of an LTI system for any time-domain input signal. For example, the following graphic shows the output of an RC circuit when fed with a square pulse:

What is seen here is the integral of the impulse response and the input square wave *as the square wave is stepped through time.* In the above convolution equation, it is seen that the operation is done with respect to $\tau$, a dummy variable. In reality, we are taking an input signal, flipping it in time about the origin (not evident with a square wave), and determining what the integral is at each value of $t$, the *delay through time.* Since the output of any LTI system is causal (meaning it cannot exist until the signal that excites it has been applied), we must mathematically step through time to see how each impulse of the input affects the LTI system’s impulse response – again, achieved by stepping through $\tau$ – the “time-delay” dummy variable.

To see how the convolution integral can be used to predict the output of an LTI circuit, observe the following example:

For an LTI system with an impulse response of $h(t)$, calculate the output, $y(t)$, given the input of:

The output of this system is found by solving:

$y(t) = \displaystyle\int_{0}^{+\infty} x(\tau)\, h(t - \tau)\, d\tau$

We only integrate between $0$ and $+\infty$ because, if we define $t = 0$ as the time that the input signal is applied, then both $x(t)$ and $h(t)$ have zero magnitude at any time $t < 0$.

From there, we calculate:

Next, we can simplify and compute the integral:

Since for all , we can write the output as:

This result *describes the output function for an LTI system with the given impulse response when fed the given input signal.*
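A convolution integral like this can also be checked numerically. The sketch below uses assumed example signals – $h(t) = e^{-t}u(t)$ and a unit-step input, whose convolution is $1 - e^{-t}$ – and approximates the integral with `numpy.convolve` scaled by the time step.

```python
# Numeric sanity check of a convolution integral, with assumed example
# signals (not necessarily those used in the worked example above):
# h(t) = e^{-t} u(t) and x(t) = u(t), whose convolution is y(t) = 1 - e^{-t}.
import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)
h = np.exp(-t)                  # impulse response, sampled
x = np.ones_like(t)             # unit-step input, sampled

# Discrete approximation of the convolution integral: scale the sum by dt.
y = np.convolve(x, h)[: len(t)] * dt

y_exact = 1 - np.exp(-t)
print("max error vs. analytic result:", np.max(np.abs(y - y_exact)))
```

The error shrinks with the time step `dt`, since the discrete sum is a Riemann approximation of the integral.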

Often, one may wish to compute the convolution of two signals that can’t be described with one function of time alone. For arbitrary signals, such as pulse trains or PCM signals, the convolution *at any time t* can be computed graphically. For signals *whose individual “sections” can be described mathematically*, follow these steps to perform a convolution:

1.) Choose one of the two functions ($x(t)$ or $h(t)$), and leave it fixed in $\tau$-space.

2.) Flip the *other* function across the vertical axis, so that it is *time-inverted*.

3.) Shift the inverted signal along the $\tau$ axis by $t$ seconds. Choose $t$ to shift the signal to the first “section” of the fixed function that is described by the same equation. The inverted signal (say, $h(-\tau)$), now shifted, represents $h(t - \tau)$, which is basically a “freeze frame” of the output after the input signal has been fed to the LTI system for $t$ seconds.

4.) The integral of the product of the two functions, after shifting the inverted function by $t$ seconds, is the value of the convolution integral (i.e. the output signal) at time $t$.

5.) Repeat this procedure through all “sections” of the function fixed in $\tau$-space. By doing this, you can compute the value of the output at any time $t$!

The following is a list of useful properties of the convolution integral that can help in developing an intuitive approach to solving problems:

1.) Commutative Property:

$f(t) * g(t) = g(t) * f(t)$
2.) Distributive Property:

$f(t) * [g_1(t) + g_2(t)] = f(t) * g_1(t) + f(t) * g_2(t)$
3.) Associative Property:

$f(t) * [g_1(t) * g_2(t)] = [f(t) * g_1(t)] * g_2(t)$
4.) Shift Property:

if $f(t) * g(t) = c(t)$

then $f(t) * g(t - T) = c(t - T)$

5.) Convolution with an Impulse results in the original function:

$f(t) * \delta(t) = f(t)$

where $\delta(t)$ is the unit impulse function

6.) Width Property:

*The convolution of a signal of duration $T_1$ and a signal of duration $T_2$ will result in a signal of duration $T_1 + T_2$.*
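The commutative and width properties are easy to spot-check numerically. In discrete time, the convolution of sequences of lengths $N$ and $M$ has length $N + M - 1$, the discrete analog of the width property. The sequences below are arbitrary example values.

```python
# Numeric spot-check of the commutative and width properties using two
# arbitrary finite-duration sequences (values assumed for illustration).
import numpy as np

f = np.array([1.0, 2.0, 3.0])          # "duration" of 3 samples
g = np.array([0.5, -1.0, 0.25, 2.0])   # "duration" of 4 samples

fg = np.convolve(f, g)
gf = np.convolve(g, f)

assert np.allclose(fg, gf)               # commutative: f * g = g * f
assert len(fg) == len(f) + len(g) - 1    # discrete analog of the width property
print("commutativity and width checks passed")
```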

Finally, here is a Convolution Table that can *greatly *reduce the difficulty in solving convolution integrals.

Thank you so much to Safa Khamis @ Kansas State University for taking the time to write this tutorial for Engineersphere and the electrical engineering community.


Finding the inverse of a matrix is much more complex than finding the inverse of a number. Every nonzero real number $a$ has an inverse (i.e. $a \cdot a^{-1} = 1$). However, not all matrices have an inverse. There are several characteristics that allow us to visibly determine whether a matrix has an inverse, but we will only focus on one: a matrix must be square (i.e. 2×2, 3×3, etc.) to have an inverse. Performing the following manipulations will be a waste of time if a matrix is not square. It is also important to know the inverse matrix property: just as $a \cdot a^{-1} = 1$, with matrices $A A^{-1} = A^{-1} A = I_n$, where $I_n$ is the identity matrix (the diagonal from top left to bottom right contains all 1’s, and everything else is 0). We take advantage of this property when solving systems of matrices.

In words, the general algorithm for determining the existence of an inverse matrix is to manipulate the matrix into row reduced echelon form (rref). If the rref matrix is an identity matrix, then the inverse matrix exists. Hang on now – earlier I mentioned that there were other, visible characteristics that allow us to determine the existence of an inverse matrix, but now I’m asking you to perform a tedious process (without a calculator) with the same goal? Wouldn’t it be easier to first determine if finding the rref of the matrix is worthwhile? You’re right, except we are going to make a simple manipulation, and at the same time that we finish our rref process and determine that an inverse matrix exists, we will have found the inverse matrix! How do we do that? We will create an augmented matrix between our matrix in question, $A$, and the appropriate identity matrix $I_n$, where the size of $I_n$ is equal to the size of $A$. We will perform the same rref process on the augmented matrix $[A \mid I_n]$. If the portion of our augmented matrix previously belonging to matrix $A$ reduces to an identity matrix (indicating the existence of $A^{-1}$), then the portion previously belonging to the identity matrix will equal $A^{-1}$.

Now, for the math…

Suppose we are asked to find the inverse of the following matrix:

First, we must set up the augmented matrix discussed above. Notice that I have simply placed the identity matrix (of the same size as $A$) on the right of matrix $A$.

Next, we will attempt to find the rref of the augmented matrix. If the portion of the augmented matrix previously belonging to $A$ yields an identity matrix, $A$ is invertible.

rref =

Ok great! The left half of our augmented matrix reduced to an identity matrix. That means two things to us: the matrix $A$ has an inverse *and* we’ve already found it. If you recall from above, $A^{-1}$ is the right half of the augmented matrix (after finding its rref, of course). So we can conclude:

If our rref of the augmented matrix had yielded anything other than an identity matrix in the left half, we would conclude that $A^{-1}$ does not exist. This method allows us to determine both the existence of, and the entries of, $A^{-1}$ for any square matrix.
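The augmented-matrix procedure described above can be sketched as a small Gauss-Jordan routine. The function and the example matrix below are illustrative (the matrix is not the one from this article): it reduces $[A \mid I]$ and returns the right half if the left half reaches the identity, or `None` otherwise.

```python
# A minimal Gauss-Jordan sketch of the augmented-matrix method described above:
# reduce [A | I] to rref; if the left half becomes I, the right half is A^{-1}.
import numpy as np

def inverse_via_rref(A, tol=1e-12):
    n = A.shape[0]
    aug = np.hstack([A.astype(float), np.eye(n)])  # build [A | I]
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry to the diagonal.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if abs(aug[pivot, col]) < tol:
            return None                # left half cannot reach I: no inverse
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]      # scale the pivot row so the pivot is 1
        for row in range(n):           # clear the rest of the column
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                  # right half is now A^{-1}

A = np.array([[2.0, 1.0], [5.0, 3.0]])
A_inv = inverse_via_rref(A)
print(A_inv)                           # exact inverse is [[3, -1], [-5, 2]]
```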

Typically, we would divide $b$ by $A$ to solve for $x$; however, there is no method for performing division between matrices. By taking advantage of the inverse matrix property $A^{-1} A = I_n$, we can simplify the formula $Ax = b$ to solve for the column vector $x$. The commutative property does not apply in matrix multiplication, so in general $AB \neq BA$. *Therefore we have to be aware of the ‘order’ in which we multiply*:

$A^{-1} A x = A^{-1} b$ simplifies to $x = A^{-1} b$

Notice that since we multiplied by $A^{-1}$ ‘first’ (on the left) on the left side of the equation, we also multiply ‘first’ on the right side. Now, multiplying the inverse of matrix $A$ by the column vector $b$ will yield a column vector matching our unknowns. Below, I have used the equation $x = A^{-1} b$ and plugged in the values. The product between $A^{-1}$ and $b$ is shown on the far right. Note: This article assumes you know how to find the inverse of a matrix. This process is described in my article Finding The Inverse of a Matrix.

Therefore, the values of the unknowns are given by the resulting column vector. Simple systems (i.e. this 3×3 system) are much easier to solve with algebra instead of finding the inverse of the coefficient matrix and performing matrix multiplication. This application is more practical for larger systems or while working on Matrix Theory homework.
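As a sketch of the method, here is a small Python example that solves an assumed 3×3 system (not the one from this article) by multiplying $A^{-1}$ on the left.

```python
# Sketch of solving A x = b with the inverse, for an assumed 3x3 system
# (hypothetical values, not the system from this article).
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])

x = np.linalg.inv(A) @ b      # x = A^{-1} b -- note A^{-1} multiplies on the left
print(x)

# Sanity check: the solution must satisfy the original system.
assert np.allclose(A @ x, b)
```

In practice, `np.linalg.solve(A, b)` is preferred for larger systems, since it avoids explicitly forming $A^{-1}$.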
