Register for our Complete Altium Video Tutorial Series
For the full resolution version of this video, feel free to check it out directly on YouTube.
This video series will introduce Xilinx System Generator and cover the basic principles of the design flow. The video will also contrast the System Generator design flow with a typical HDL-only design flow.
The full video series (still under development) consists of several parts:
The video outline is located below the video.
Overview
Creating a new design using MATLAB Simulink
Tour of the Xilinx block set for Simulink
Creating a New Design Using MATLAB Simulink
Open up the design environment using the Xilinx shortcut
Don’t start MATLAB directly.
Create new MDL file
Change preferences to discrete, fixed-step simulator
Xilinx Block Set for Simulink
Every System Generator design must contain a System Generator token
Almost every design will need input and output gateways (Gateway In and Gateway Out blocks).
Tour of Xilinx Blocksets
For the full resolution version of this video, you can check it out on YouTube. Also, if you're interested in the exact parts used in this video, take a look at the bill of materials listed below.
Overview
What is Xilinx System Generator
Design flow using system generator
Why use system generator for DSP designs
Design outputs from system generator
What Is Xilinx System Generator
Xilinx System Generator is a design methodology for creating digital signal processing designs for FPGAs.
Integrates MATLAB/Simulink with Xilinx design tools to target Xilinx FPGAs
Design Flow
Compare Traditional HDL vs System Generator
Typical HDL design flow
Create floating point algorithm models
Verify floating point algorithm operation in MATLAB
Implement fixed point design in Verilog or VHDL (re-create the floating point design in fixed point)
Verify HDL outputs in MATLAB (re-verify algorithm)
Target FPGA and deploy
System generator design flow
Create floating point test signals
Create FPGA design using Xilinx blocks in Simulink
Simulate design and verify outputs
Target FPGA and deploy
System Generator Design Environment
Why Use System Generator for DSP Designs
Reduced time from algorithm design to implementation
Design, simulate, verify and target FPGA in one step
Outputs
Simulink simulation
Timing, resources, and power
Implementation files (.bit, .ngc, etc.)
Hardware co-simulation using Xilinx development boards (or custom)
To view this video in full size, head over to YouTube.
The FFT results can be confusing for beginners, and it may not be obvious how frequency is distributed along the axis. Additionally, if you don’t pay close attention to the size of the FFT and the sampling frequency, your signal of interest may not even appear in your spectrum.
This article will show how to generate a frequency axis for the FFT in MATLAB. It will also discuss the relationship between sampling frequency and FFT size, as well as how to make sure your signals of interest are properly plotted.
For the sake of this article, I’m going to assume that the time-domain signal is at least Nyquist sampled, meaning that the sampling frequency is at least twice the frequency of the maximum frequency in the time-domain signal.
The general rule of thumb is to make sure your highest frequency content in your signal of interest is no more than 40% of your sampling frequency.
The FFT shows the frequency-domain view of a time-domain signal in the frequency space of -fs/2 to fs/2, where fs is the sampling frequency. This frequency space is split into N points, where N is the number of points in the FFT. The spacing between points in the frequency domain is simply:

Δf = fs / N

Using this equation, a frequency vector can be built in MATLAB with the following code:
f=fs/2*[-1:2/nfft:1-2/nfft];
The point +fs/2 is not part of the vector: with N points spanning the -fs/2 to fs/2 range, covering zero frequency (DC) as well as both endpoints would require N + 1 points, so the positive endpoint is dropped.
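For readers working outside MATLAB, the same axis can be sketched in plain Python. The sampling frequency and FFT size below are illustrative placeholders, not values taken from the article:

```python
fs = 100e6   # sampling frequency in Hz (illustrative value)
nfft = 2048  # FFT size (illustrative value)

# n/nfft runs from -1/2 up to (but not including) +1/2, so the axis
# spans -fs/2 to +fs/2 - fs/nfft in steps of fs/nfft, matching the
# MATLAB vector f = fs/2*[-1:2/nfft:1-2/nfft];
f = [fs * n / nfft for n in range(-nfft // 2, nfft // 2)]
```

Note that +fs/2 itself is excluded, exactly as in the MATLAB version.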
Choosing the Right FFT Size
Considering an example waveform with a 1 V-peak sinusoid at 1.05 MHz, let’s start exploring this concept.
Let’s start off by thinking about what we should expect to see in a power spectrum. Since the sinusoid has 1 Vpeak amplitude, we should expect to see a spike in the frequency domain with 10 dBm amplitude at 1.05 MHz.
If we use a 2048-point FFT to analyze the signal, we get the following power spectrum:
Although we’ve picked a nice power of two for the FFT, the spectrum doesn’t give the expected results. The closest points in our FFT are 1025.4 kHz and 1074.2 kHz, which correspond to the 21st and 22nd FFT bins, respectively. That means that our true peak at 1050 kHz is not plotted!
If we change our FFT size to a number that places 1.05 MHz exactly on an FFT bin, we’ll see a nice peak at 10 dBm. Let’s choose 4000 points instead:
Now we see a peak at the correct frequency with the correct amplitude! FFT bin 42 corresponds to the frequency 1.05 MHz.
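The bin arithmetic can be checked with a few lines of code. This sketch assumes a 100 MHz sampling rate (the rate used in the companion zero-padding example; it is not restated in this section), which makes 1.05 MHz land exactly on bin 42 of a 4000-point FFT but fall between bins of a 2048-point FFT:

```python
fs = 100e6        # sampling frequency (assumed value, not stated in this section)
f_sig = 1.05e6    # frequency of the sinusoid

df_2048 = fs / 2048   # ~48.83 kHz bin spacing
df_4000 = fs / 4000   # 25 kHz bin spacing

# With 2048 points the signal falls between two bins...
on_bin_2048 = (f_sig / df_2048) % 1 == 0
# ...but with 4000 points it lands exactly on a bin.
bin_4000 = f_sig / df_4000
```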
Who wants to look at jagged plots? Not me. Seriously. If I’m going to present this sort of figure in a presentation, why not make it look a little prettier with virtually no effort?
This article uses a small number of points to plot the frequency-domain signals. Even though no true frequency resolution is gained by padding the time-domain signal, you can see the FFT’s characteristic sinc function by making the size of the FFT much larger than the size of your time-domain signal. Here is a plot of the same signal using 32k points.
I want to stress that even though the plot has more fidelity, the resolution has not been improved. The spacing between the sinc-function nulls, which is set by the time length of the actual data, has not changed.
If you would like a better understanding of how to increase the true resolution of the FFT, see my other article about FFT resolution and FFT zero padding. If you’re looking for the correct way to scale the frequency domain data into a power spectrum or power spectral density in MATLAB, see those links as well.
Thanks for reading!
We want to hear from you! Do you have a comment, question, or suggestion? Twitter us @bitweenie or me @shilbertbw, or leave a comment right here!
Now that you know what they are, should you start sprinkling these into your designs? Unfortunately, most designers just throw them in without considering if the benefits outweigh the additional costs and complexity they can add to a given design. Let’s take a moment to examine when the use of guard traces is appropriate.
Under certain conditions, guard traces can reduce crosstalk between adjacent traces by an order of magnitude. However, achieving this degree of improvement is typically only possible on designs without solid ground planes. If you’re using a classical stackup, then you already have a ground plane in your design, which, in a homogeneous digital system, provides nearly all the benefits of guard traces. If your design contains analog circuitry (particularly high-power circuits), or if you’re mixing logic families (like TTL and ECL), guard traces may still be beneficial.
As an example, on a PCB with adjacent traces separated by 40 mils (centerline distance) and a dielectric thickness of 5 mils separating the traces above the ground plane, the crosstalk will measure less than 2%. In a design utilizing a common logic family (all TTL or all ECL for example) this level of crosstalk will not affect performance and guard traces buy you pretty much nothing. If you’re mixing logic families though, this level of crosstalk may be problematic. Simply examine the noise margin of your design to determine whether or not 2% crosstalk is enough to upset your lower voltage swing logic components.
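That noise-margin check is simple arithmetic. The sketch below uses hypothetical numbers (a 5 V aggressor swing and a 150 mV victim noise margin, neither taken from the article) just to show the comparison:

```python
crosstalk = 0.02               # 2% coupling from the geometry described above
aggressor_swing_v = 5.0        # hypothetical aggressor logic swing, in volts
victim_noise_margin_v = 0.15   # hypothetical noise margin of the victim logic

coupled_noise_v = crosstalk * aggressor_swing_v   # noise induced on the victim
# A guard trace is worth considering only if the coupled noise eats the margin
needs_guard_trace = coupled_noise_v > victim_noise_margin_v
```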
Much of PCB design is trial and error, so what if you have a fabricated design that is exhibiting unacceptable levels of crosstalk between adjacent traces? It is always best to go back to first principles and try to understand the coupling mechanism. With that said, inserting a guard trace may help. The rule of thumb is that inserting a guard trace grounded at regular intervals (such as the one shown in the figure above) between signal traces will reduce crosstalk to approximately 25% of its current level (a substantial change to be sure!).
Register for our Beginner PCB Video Tutorial Series
Guard traces are just one tool in a PCB designer’s toolbox to reduce crosstalk. As with any tool, you must know how and when it is appropriate to use it. When a design exhibits substantial noise problems, it is always best to try to understand the underlying noise source. Under certain conditions though, guard traces can be very effective in reducing crosstalk.
This article will explore zero-padding the Fourier transform–how to do it correctly and what is actually happening. The exploration will cover the following topics:
Zero padding is a simple concept; it simply refers to adding zeros to the end of a time-domain signal to increase its length. The example waveform we will be using throughout this article, containing real-valued sinusoids at 1 MHz and 1.05 MHz, is shown in the following plot:
The time-domain length of this waveform is 1000 samples. At the sampling rate of 100 MHz, that is a time-length of 10 us. If we zero pad the waveform with an additional 1000 samples (or 10 us of data), the resulting waveform is produced:
There are a few reasons why you might want to zero pad time-domain data. The most common reason is to make a waveform have a power-of-two number of samples. When the time-domain length of a waveform is a power of two, radix-2 FFT algorithms, which are extremely efficient, can be used to speed up processing time. FFT algorithms made for FPGAs also typically only work on power-of-two lengths.
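As a sketch of that padding step (the 1000-sample length comes from the article's example; the zero-valued samples stand in for real data):

```python
def next_pow2(n):
    """Smallest power of two greater than or equal to n."""
    p = 1
    while p < n:
        p *= 2
    return p

signal = [0.0] * 1000  # placeholder for the 1000-sample waveform
# Pad with zeros up to the next power of two so a radix-2 FFT can be used
padded = signal + [0.0] * (next_pow2(len(signal)) - len(signal))
```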
While it’s often necessary to stick to powers of two in your time-domain waveform length, it’s important to keep in mind how doing that affects the resolution of your frequency-domain output.
There are two aspects of FFT resolution. I’ll call the first one “waveform frequency resolution” and the second one “FFT resolution”. These are not technical names, but I find them helpful for the sake of this discussion. The two can often be confused because when the signal is not zero padded, the two resolutions are equivalent.
The “waveform frequency resolution” is the minimum spacing between two frequencies that can be resolved. The “FFT resolution” is the number of points in the spectrum, which is directly proportional to the number of points used in the FFT.
It is possible to have extremely fine FFT resolution, yet not be able to resolve two coarsely separated frequencies.
It is also possible to have fine waveform frequency resolution, but have the peak energy of the sinusoid spread throughout the entire spectrum (this is called FFT spectral leakage).
The waveform frequency resolution is defined by the following equation:

Δf_waveform = 1 / T

where T is the time length of the signal with data. It’s important to note here that you should not include any zero padding in this time! Only consider the actual data samples.
It’s important to make the connection here that the discrete time Fourier transform (DTFT) or FFT operates on the data as if it were an infinite sequence with zeros on either side of the waveform. This is why the FFT has the distinctive sinc function shape at each frequency bin.
You should recognize the waveform resolution equation 1/T is the same as the space between nulls of a sinc function.
The FFT resolution is defined by the following equation:

Δf_FFT = fs / nfft

where fs is the sampling frequency and nfft is the number of points in the FFT.
Considering our example waveform with 1 V-peak sinusoids at 1 MHz and 1.05 MHz, let’s start exploring these concepts.
Let’s start off by thinking about what we should expect to see in a power spectrum. Since both sinusoids have 1 Vpeak amplitudes, we should expect to see spikes in the frequency domain with 10 dBm amplitude at both 1 MHz and 1.05 MHz.
The original time-domain signal, shown in the first plot, has a length of 1000 samples (10 us). A 1000-point FFT of the time-domain signal is shown in the next figure:
The two peaks are not distinct, and the single wide peak has an amplitude of about 11.4 dBm. Clearly these results don’t give an accurate picture of the spectrum. There is not enough resolution in the frequency domain to see both peaks.
Let’s try to resolve the two peaks in the frequency domain by using a larger FFT, thus adding more points to the spectrum along the frequency axis. Let’s use a 7000-point FFT. This is done by zero padding the time-domain signal with 6000 zeros (60 us). The zero-padded time-domain signal is shown here:
The resulting frequency-domain data, shown as a power spectrum, is shown here:
Although we’ve added many more frequency points, we still cannot resolve the two sinusoids; we are also still not getting the expected power.
Taking a closer look at what this plot is telling us, we see that all we have done by adding more FFT points is to more clearly define the underlying sinc function arising from the waveform frequency resolution equation. You can see that the sinc nulls are spaced at about 0.1 MHz.
Because our two sinusoids are spaced only 0.05 MHz apart, no matter how many FFT points (zero padding) we use, we will never be able to resolve the two sinusoids.
Let’s look at what the resolution equations are telling us. Although the FFT resolution is about 14 kHz (more than enough resolution), the waveform frequency resolution is only 100 kHz. The spacing between the signals is 50 kHz, so we are limited by the waveform frequency resolution.
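Those two limits can be computed directly from the article's numbers (100 MHz sampling, 1000 real samples, a 7000-point FFT, tones 50 kHz apart):

```python
fs = 100e6                 # sampling frequency from the example
t_data = 1000 / fs         # 10 us of actual (non-padded) data

waveform_res = 1 / t_data  # ~100 kHz: set only by the data length
fft_res = fs / 7000        # ~14.3 kHz: set by the FFT length
tone_spacing = 1.05e6 - 1.0e6   # 50 kHz between the two sinusoids

# Resolvability is limited by the coarser waveform frequency resolution
resolvable = waveform_res < tone_spacing
```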
To resolve the spectrum properly, we need to increase the amount of time-domain data we are using. Instead of zero padding the signal out to 70 us (7000 points), let’s capture 7000 points of the actual waveform. The expanded time-domain signal is shown here:
The resulting frequency-domain data, shown as a power spectrum, is shown here:
With the expanded time-domain data, the waveform frequency resolution is now about 14 kHz as well. As seen in the power spectrum plot, the two sinusoids are now distinct. The 1 MHz signal is clearly represented and is at the correct power level of 10 dBm, but the 1.05 MHz signal is spread wider and not showing the expected power level of 10 dBm. What gives?
What is happening with the 1.05 MHz signal is that we don’t have an FFT point at 1.05 MHz, so the energy is split between multiple FFT bins.
The spacing between FFT points follows the equation:

Δf = fs / nfft
where nfft is the number of FFT points and fs is the sampling frequency.
In our example, we’re using a sampling frequency of 100 MHz and a 7000-point FFT. This gives us a spacing between points of 14.28 kHz. The frequency of 1 MHz is a multiple of the spacing, but 1.05 MHz is not. The closest frequencies to 1.05 MHz are 1.043 MHz and 1.057 MHz, so the energy is split between the two FFT bins.
To solve this issue, we can choose the FFT size so that both frequencies are single points along the frequency axis. Since we don’t need finer waveform frequency resolution, it’s okay to just zero pad the time-domain data to adjust the FFT point spacing.
Adding an additional 1000 zeros (10 us) to the time-domain signal gives us a spacing of 12.5 kHz, and both 1 MHz and 1.05 MHz are integer multiples of the spacing. The resulting spectrum is shown in the following figure.
Now both frequencies are resolved and at the expected power of 10 dBm.
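The bin alignment can be verified numerically from the values in the text (7000 data samples plus 1000 padded zeros at 100 MHz):

```python
fs = 100e6          # sampling frequency from the example
nfft = 7000 + 1000  # 7000 data samples plus 1000 padded zeros

df = fs / nfft      # 12.5 kHz spacing between FFT points
# Both tones now land exactly on FFT bins (bins 80 and 84)
bin_1_00 = 1.00e6 / df
bin_1_05 = 1.05e6 / df
```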
For the sake of overkill, you can always add more points to your FFT through zero padding (ensuring that you have the correct waveform resolution) to see the shape of the FFT bins as well. This is shown in the following figure:
Three considerations should factor into your choice of FFT size, zero padding, and time-domain data length.
1) The waveform frequency resolution should be smaller than the minimum spacing between frequencies of interest.
2) The FFT resolution should at least support the same resolution as your waveform frequency resolution. Additionally, some highly-efficient implementations of the FFT require that the number of FFT points be a power of two.
3) You should ensure that there are enough points in the FFT, or the FFT has the correct spacing set, so that your frequencies of interest are not split between multiple FFT points.
One final thought on zero padding the FFT:
If you apply a windowing function to your waveform, the windowing function needs to be applied before zero padding the data. This ensures that your real waveform data starts and ends at zero, which is the point of most windowing functions.
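A minimal sketch of that ordering, using a Hann window (one common choice; the eight-sample data vector is an illustrative placeholder):

```python
import math

def hann(n):
    """Symmetric Hann window of length n."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

data = [1.0] * 8   # placeholder for real waveform samples
w = hann(len(data))

windowed = [d * wi for d, wi in zip(data, w)]  # 1) window the real data first
padded = windowed + [0.0] * 8                  # 2) then zero pad for the FFT
```

Because the window tapers the data to zero at both ends, the transition into the padded zeros is smooth.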
Thanks for reading!
We want to hear from you! Do you have a comment, question, or suggestion? Twitter us @bitweenie or me @shilbertbw, or leave a comment right here!
The classic 4-layer PCB stackup includes two routing layers and two internal planes, one for ground and the other for power.
Adhering to this stackup, including the core and prepreg heights shown, while utilizing FR-4 PCB material and 1 oz copper, the properties of 50 Ohm traces on the routing layers are provided in the table below.
| Layer | 50 Ohm Trace Width |
| --- | --- |
| Top Layer | 0.017" |
| Bottom Layer | 0.017" |
The 6-layer PCB classic stackup includes four routing layers (two outer and two internal) and two internal planes (one for ground and the other for power).
Again, adhering to this stackup while utilizing FR-4 PCB material and 1 oz copper, the properties of 50 Ohm traces are provided in the table below.
| Layer | 50 Ohm Trace Width |
| --- | --- |
| Top Layer | 0.0170" |
| Internal Routing Layers | 0.0065" |
| Bottom Layer | 0.0170" |
Remember, these stackups aren’t for every design. For example, high-speed designs will typically keep power and ground planes on adjacent layers for decoupling, and designs that require low electromagnetic emissions may need additional ground planes for shielding. One final practicality to consider: always route traces perpendicular on adjacent routing layers, e.g., on the 6-layer stackup, side to side on Internal Routing Layer 1 and top to bottom on Internal Routing Layer 2. This technique increases routing efficiency and also minimizes crosstalk.
The catch is that concepts in electrical engineering are sometimes pretty difficult to visualize. For example, how do you visualize propagating waves interacting with an antenna surface? That’s pretty tough.
What about negative frequencies? Maybe you intuitively think about frequency as the rate of repetition. That’s not wrong, but it’s not the whole picture; a frequency spectrum has a negative and positive frequency axis. What could it possibly mean to have a negative repetition?
This article will discuss some concepts about the frequency spectrum, negative frequencies, and complex signals.
Remembering that, physically, sinusoids are waves, the sign of the frequency represents the direction of wave propagation. Simply put, negative frequencies represent forward-traveling waves, while positive frequencies represent backward-traveling waves.
This sign relation is by convention. Electrical engineers define wave propagation as the following:
It’s important to note that not all fields of science define wave propagation in the same sense. The most notable case is for physics. Physicists define wave propagation in exactly the opposite sense, with positive frequencies propagating forward.
The following two spectrums show the location of a forward-traveling and backward-traveling wave, respectively.
Using a forward-propagating complex sinusoid as an example, moving the sign from its typical location to in front of the frequency symbol illustrates the concept.
Maybe you’re used to only modeling signals with trigonometric functions, like sine and cosine. So what happens when we plot the spectrum of a cosine?
You might be saying to yourself, “I didn’t specify a direction of propagation. It’s just a cosine!” You actually did, and to prove it, let’s look at Euler’s equation:

e^(jωt) = cos(ωt) + j·sin(ωt)
Euler’s equation shows how a complex exponential is related to both the sine and cosine functions. The direction of propagation is much more obvious when looking at the complex form of sinusoids.
Keeping in mind the original cosine function that we plotted at the beginning of this section, let’s rewrite the cosine using Euler’s equation:

cos(ωt) = (e^(jωt) + e^(-jωt)) / 2
Now it’s obvious that the cosine function splits the energy equally between forward and backward traveling waves!
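This split is easy to verify numerically. The sketch below checks Euler's equation and the equal-weight decomposition at an arbitrary angle:

```python
import cmath
import math

x = 0.7  # arbitrary angle in radians

# Euler's equation: e^(jx) = cos(x) + j*sin(x)
lhs = cmath.exp(1j * x)
rhs = complex(math.cos(x), math.sin(x))

# A cosine is the equal-weight sum of forward and backward complex exponentials
cos_split = (cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2
```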
So what does it all mean, and how do I relate all this to real signals traveling down wires?
Whether or not you should care about positive and negative frequencies depends on how you got your time-domain data and your application.
If your samples are just real numbers, you can ignore half the spectrum and double the power in the remaining side. For real-sampled data the spectrum is symmetric, so it doesn’t matter which side you use. If your samples are complex (I and Q), you need both sides of the spectrum, since the spectrum is asymmetric in general.
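The symmetry claim for real samples can be demonstrated with a naive DFT (the 16-sample test signal is illustrative):

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) DFT, sufficient to demonstrate spectral symmetry."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

# A real-valued test signal (an arbitrary tone plus a DC offset)
n = 16
x = [math.sin(2 * math.pi * 3 * m / n) + 0.5 for m in range(n)]
X = dft(x)

# For real input, bin k is the complex conjugate of bin n-k
symmetric = all(abs(X[k] - X[-k % n].conjugate()) < 1e-9 for k in range(n))
```

With complex (I/Q) samples this symmetry generally disappears, which is why both sides of the spectrum carry information.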
If you are modeling a system in MATLAB, decide whether or not you care about direction of propagation. For applications such as simple real-valued filters, you can just use cosines and look at a single-sided spectrum. If you’re modeling something where propagation and/or reflection matters (e.g. radar system), you should be modeling with complex exponentials. In that case you care about both sides of the spectrum.
For more information about the power spectrum and complex sampling, check out this post on the power spectrum.
In either method (real or complex data), energy is conserved; it’s all in how you look at it mathematically. Complex signals can be represented as sines and cosines; real signals can be represented with equivalent complex representations.
Look up how the Hilbert Transform works and is used. Keep your eyes peeled for a post on the Hilbert Transform in the future.
Special thanks to Santosh for suggesting this topic! You can request your own topic too by clicking the link at the top of the page.
Thanks for reading!
We want to hear from you! Do you have a comment, question, or suggestion? Twitter us @bitweenie or me @shilbertbw, or leave a comment right here!