Blog

DARPA Spectrum Challenge

Last week, DARPA announced that it will be sponsoring a Spectrum Challenge. This is a huge opportunity for the radio field. For years, we have been researching issues of spectrum sharing (or Dynamic Spectrum Access (DSA)). While we've generated a lot of good ideas, we haven't yet seen these ideas take shape in real, provable systems. Too often, the work is a simulation or an experimental test bed with too many controlled parameters.

DARPA, through their Spectrum Challenge, has a chance to change this. Forcing teams to develop and compete against each other as well as other interfering radios means that we have to think about real, unexpected challenges to our ideas. We will have to develop both robust algorithms and robust systems. What will result will almost certainly be a large number of advances in the science, technology, and understanding of the coexistence of radios.

From my perspective as the maintainer of GNU Radio, this is a great opportunity for us, too. While DARPA is not mandating the use of GNU Radio, they are requiring that teams demonstrate competency in our software. And since the final challenge will be done using USRPs, I hope that many teams will continue to use GNU Radio as their platform of choice. As I said, the challenge will not only involve developing robust spectrum sharing algorithms, it will also demand robust platforms. GNU Radio is well-known, well-tested, and has an active, educated community of users, and so is a perfect platform to build upon.

As the head of GNU Radio, I will not be participating directly in the competition. I hope to be able to advise and help all teams as I am able, and I do not want to be biased by any stake I have. Personally, my stake in this competition is the advancement of the science and technology of DSA as well as the opportunity it provides for GNU Radio.

For more details of the challenge, visit DARPA's website.

Their Q&A page is a good, quick read over the main aspects of the challenge to get up to speed.

Some Really Cool DSP

I finally took the time to really understand the polyphase synthesis filterbank and how to use it to reconstruct previously channelized signals. Once again, I looked to fred harris' work on the subject and have cracked it. The keys are in creating an M-to-2fs PFB channelizer, a 2-to-Pn PFB synthesizer, and perfect reconstruction filters (-6 dB at the edge of the passband).

I plan on writing up a more thorough look into this, and all of the code to do the proper synthesis filter will be going into GNU Radio soon, but for now, some pretty pictures.

First, we start off with a QPSK signal with very little noise:

Here, I split the signal up into 4 equally spaced channels. Channel 0 is around DC (0 Hz) and spans -1000 to 1000 Hz, channel 1 spans 1000 to 3000 Hz, and channel 3 spans -3000 to -1000 Hz. Channel 2 actually spans 3000 to 4000 Hz and then wraps around to go from -4000 to -3000 Hz. The signals have a 1000 Hz bandwidth but are sampled at 2 samples/symbol, so the channel sample rate is 2000 Hz.
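Since the channels come out in FFT ordering, the channel-to-center-frequency mapping can be sketched with NumPy's fftfreq (a quick illustration of the numbering above, not part of the GNU Radio code):

```python
import numpy as np

# Center frequency of each channel out of an M-channel PFB channelizer
# running at an input rate of fs.  The FFT ordering is why channel 2
# straddles the +/-4000 Hz edge of the spectrum.
M = 4          # number of channels
fs = 8000.0    # input sample rate in Hz

centers = np.fft.fftfreq(M, d=1.0 / fs)
print(centers)  # channel 0 at DC, channel 2 wrapped at the band edge
```

This reproduces the mapping described above: channels 0 through 3 sit at 0, 2000, -4000 (wrapped), and -2000 Hz.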

I then go through the 2-to-Pn synthesizer, where Pn is actually 8. This synthesis filter takes the 4 channels in and reassembles them at twice the sampling rate, so we now have the original signal sampled at twice the original rate. Notice also that it's offset by a bit in frequency in the PSD. That's a result of my plotting that I don't feel like correcting (it's related to the reason why channel 2 spans the edge of the positive and negative spectrum). Obviously, the constellation is not rotating, so we're ok.

Notice also that this signal was first split up, filtered, and then put back together again. Perfectly. That constellation should be enough proof of that. This has some huge implications for what we can do with this concept once I generalize it. This was a simple demonstration to get the algorithm and math correct, but now we can have some serious fun!

For those of you into such things, this is the filter I used for this example:

I generated it using GNU Radio's firdes.low_pass_2 filter generator with these parameters:

  • Gain = 1
  • Sample Rate = 8000 Hz
  • Bandwidth = 1000 Hz
  • Transition bandwidth = 200 Hz
  • Attenuation = 80 dB
  • Window: Blackman-Harris

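For comparison, here is a rough windowed-sinc stand-in for that design in NumPy. Note this is only an approximation: the real firdes.low_pass_2 chooses the tap count from the attenuation and transition-width specs, while here the tap count (151) is picked by hand.

```python
import numpy as np

def lowpass_blackmanharris(ntaps, fs, cutoff, gain=1.0):
    """Windowed-sinc low-pass filter: a rough stand-in for GNU Radio's
    firdes.low_pass_2 (which derives ntaps from the attenuation and
    transition-width specs; here ntaps is chosen by hand)."""
    n = np.arange(ntaps) - (ntaps - 1) / 2.0
    h = np.sinc(2.0 * cutoff / fs * n)     # ideal low-pass impulse response
    # 4-term Blackman-Harris window
    m = np.arange(ntaps)
    w = (0.35875
         - 0.48829 * np.cos(2 * np.pi * m / (ntaps - 1))
         + 0.14128 * np.cos(4 * np.pi * m / (ntaps - 1))
         - 0.01168 * np.cos(6 * np.pi * m / (ntaps - 1)))
    h *= w
    return gain * h / h.sum()              # normalize to unity DC gain

taps = lowpass_blackmanharris(ntaps=151, fs=8000.0, cutoff=1000.0)
```

A handy property of this shape is that the response sits near -6 dB (half amplitude) at the 1000 Hz band edge, which is the perfect-reconstruction condition mentioned above.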
 

Volk Benchmarking

Benchmarking Volk in GNU Radio

The intention of Volk is to increase the speed of our signal processing capabilities in GNU Radio, and so there needs to be a way to look at this. In particular, there were some under-the-hood changes to the scheduler to allow us to more efficiently use Volk (by trying to provide only aligned Volk calls; see Volk Integration to GNU Radio for more on that). These changes ended up putting more lines of code into the scheduler, so that every time a block's work function is called, the scheduler has more computations and logic to perform.

Because of these changes, I am interested in understanding the performance hit taken by the change in the scheduler as well as the performance gain we would get from using Volk. If the hit to the scheduler is less than a few percentage points while the gain from Volk is much larger, then we win.

Performance Measuring

There is a lot of debate about what the right performance measurement is for something like this. In signal processing algorithms, we are interested in the speed at which we can process a sample (or bit, symbol, etc.), so a time-based measurement is what we are going to look at. Specifically, how much time does it take a block to process $N$ samples?

If we are interested in timing a block, the next question is what clock to use. And if we look into this, everyone has their own opinion on it. There's wall time, but that's suspect because it doesn't account for interruptions by the OS. There are the user and system times, but they don't seem to really represent the time it actually takes a program to produce the output; and do we combine those times or just use one of them? They also really represent a lower bound, assuming no sharing were occurring and no other system overhead.

In the end, I decided what I cared about, and what our users would care about, is the expected time taken by a block to run. So I'm going with the wall clock method here. Then there's the question of mean, min, or max time? They all represent different ways to look at the results. It is, frankly, easy enough to capture all three measurements and let you decide later which is important (for that matter, it would be an easy edit to the benchmark tools to also collect the user and system time for those who want that info, too).
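As a standalone sketch of that idea (the actual volk_benchmark tools live in the GNU Radio tree; this harness just illustrates collecting wall-clock min/mean/max, using time.perf_counter as the wall clock):

```python
import time

def benchmark(func, data, iters=10):
    """Time func(data) on the wall clock over several iterations and
    report min/mean/max seconds -- the three statistics discussed above."""
    times = []
    for _ in range(iters):
        t0 = time.perf_counter()          # monotonic wall clock
        func(data)
        times.append(time.perf_counter() - t0)
    return {"min": min(times),
            "mean": sum(times) / len(times),
            "max": max(times)}

# time a toy 'block': squaring N samples
stats = benchmark(lambda d: [x * x for x in d], list(range(10000)), iters=5)
```

The same loop could trivially also record user/system time via os.times() for those who want that info.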

The results shown in this post simply represent the mean of the wall time for a certain number of iterations for processing a certain number of samples for each block. I am also going to show the results from only one machine here to keep this post relatively short. 

Measurement Tools

I built a few measurement tools to both help me keep track of things and allow anyone else who wants to test their system's performance to do so easily. These tools are located in gnuradio-examples/python/volk_benchmark. It includes two Python programs for collecting the data and a plotting program to display the data in different ways. I won't repost here the details of how to use them. There's a lengthy and hopefully complete README in the directory to describe their use.

Measurement Results

For these measurements, I have two data collection programs: volk_math.py and volk_types.py. The first one runs through all of the math functions that were converted to using Volk and the second runs through all of the type conversions that were 'Volkified.' These could have easily been done as one program, but it makes a bit of logical sense to separate them.

The system I ran these tests on is an Intel quad-core i7 870 (first gen) at 2.93 GHz with 8 GB of DDR3 RAM. It has support for these SIMD architectures: sse sse2 ssse3 sse4_1 sse4_2.

I'm interested in comparing the results of three cases. The first case is the 'control' experiment, which is the 3.5.1 version of GNU Radio which has no edits to the scheduler or the use of Volk. Next, I want to look at the scheduler with the edits but still no Volk, which I will refer to as the 3.5.2 version of GNU Radio. The 'volk' case is the 3.5.2 version that uses Volk for the tests.

The easiest way to handle these cases was to have two parallel installs, one for version 3.5.1 and the other for 3.5.2. To test the Volk and non-Volk versions of 3.5.2, I simply edited the ~/.volk/volk_config file and switched all kernels to use the 'generic' version (see the README file in the volk_benchmark directory for more details on this).
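For anyone scripting that switch, something along these lines works. Treat the line format as an assumption: I'm assuming each non-comment line of the preferences file is 'kernel_name impl [impl]', so check your own volk_config first.

```python
def force_generic(config_text):
    """Rewrite a ~/.volk/volk_config-style preferences file so every
    kernel uses its 'generic' implementation.  Assumes the simple
    'kernel_name impl [impl]' line format; comments pass through."""
    out = []
    for line in config_text.splitlines():
        parts = line.split()
        if not parts or line.lstrip().startswith("#"):
            out.append(line)               # blank line or comment: keep as-is
        else:
            # keep the kernel name, replace each chosen impl with 'generic'
            out.append(parts[0] + " " + " ".join(["generic"] * (len(parts) - 1)))
    return "\n".join(out)
```

Running the benchmarks, flipping the file with this, and re-running gives the Volk vs. generic comparison without touching the install.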

For the results shown below, click on the image for an enlarged version of the graph.


Looking at the type conversion blocks, we get the following graph:

Volk Type Conversion Results

Another way to look at the results is to look at the percent speed difference between the 3.5.2 versions and the 3.5.1. So this graph shows us how much increase (positive) or decrease (negative) of speed the two cases have over the 3.5.1 control case.
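To be precise about the metric, this is one plausible definition of that percent difference, with speed taken as the inverse of the measured time (positive means faster than the 3.5.1 baseline); the actual plotting script may compute it slightly differently.

```python
def percent_speedup(t_base, t_new):
    """Percent speed change relative to a baseline time, where
    speed ~ 1/time: positive means the new version is faster."""
    return 100.0 * (t_base / t_new - 1.0)

print(percent_speedup(2.0, 1.0))   # twice as fast  -> 100.0
print(percent_speedup(1.0, 2.0))   # half as fast   -> -50.0
```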

Percent Improvement Over v3.5.1 for Type Conversion Blocks

 

These are the same graphs for the math kernels.

Volk Math Results
 Percent Improvement Over v3.5.1 for Math Blocks

There are two interesting trends here. The less surprising one is that Volk generally provides a massive improvement in the speed of the blocks; the more complicated the block (like complex multiplies), the more we gain from using Volk.

The first really interesting result is the improvement in speed between the schedulers from 3.5.1 and 3.5.2. As I mentioned earlier, we increased the number of lines of code in the scheduler that make calculations and logic and branching calls. I expected us to do worse because of this. My conjecture here is that by providing mostly aligned blocks of memory, there is something happening with data movement and/or the cache lines that is improved. So simply aligning the data (as much as possible) is a win even without Volk.

The other interesting result is that in rare cases, the Volk call comes out worse than the generic and/or the v3.5.1 version of the block. The only math call where this happens is with the conjugate block. I can only assume that conjugating a complex number is so trivial (a sign flip of the imaginary part) that the code for it is already highly optimized. We are, though, talking about less than a 5% hit on the performance. On the other hand, the multiply conjugate block, which is how the conjugate is mostly used in signal processing, is around 350% faster.

The complex to float conversion is a bit more of a head scratcher. Again, though, we are talking about a minor (< 3%) difference. But still, that these do not perform better is really interesting. Hopefully, we can analyze this further and come up with some conclusions as to why this is occurring and maybe even improve the performance more.

 

Volk Integration to GNU Radio

Getting Volk into GNU Radio

We've been talking about integrating Volk into GNU Radio for what seems like forever. So what took us so damn long? Well, it's coming, very shortly, and I wanted to take a moment to discuss both the issues of Volk in GNU Radio and how to make use of it with some brand-new additions.


The main problem with using Volk in GNU Radio is the alignment requirements of most SIMD systems. In many SIMD architectures, Intel most notably (and we'll stick with them in these examples as it's what I'm most familiar with), we have a byte-alignment requirement when loading data into the SIMD-specific registers. When moving data in and out, there is a concept of aligned and unaligned loads and stores. You take a hit when using the unaligned versions, though, and they are not desirable. In SSE code, we need to be 16-byte aligned, while the newer AVX architecture wants a 32-byte alignment.
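To make the requirement concrete, here is a small NumPy check of whether a buffer's starting address sits on a SIMD boundary (illustrative only; GNU Radio does this bookkeeping in C++):

```python
import numpy as np

def is_aligned(arr, alignment=16):
    """True if the array's data buffer starts on an `alignment`-byte
    boundary -- the condition SSE's aligned loads require (AVX wants 32)."""
    return arr.ctypes.data % alignment == 0

buf = np.zeros(1024, dtype=np.float32)
print(is_aligned(buf), is_aligned(buf, 32))  # depends on the allocator
```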


But we have the dynamic scheduler in GNU Radio that moves data between blocks in chunks of items (where an item is whatever you want: floats, complex floats, samples, etc.). The scheduler tries to maximize system throughput by moving as large a chunk as possible to give the work function lots of data to crunch at once. Larger chunks minimize the overhead of going into the scheduler to get more data. But because we are never sure how much data any one block has ready for the next in the chain of GNU Radio blocks, we cannot always guarantee the number of items available, and so we cannot guarantee a specific byte alignment of our data streams.

We have one thing going for us, though: all buffers are page-aligned at the start. This is great since a page alignment is good enough for any current or foreseeable SIMD alignment requirement (16 or 32 bytes right now, and when we get to the problem of requiring more than 4k alignments, I'll be happy enough to readdress the problem then). So the first call to work on a buffer is always aligned. 



But what if the work function is called with a number of items that breaks the alignment? What are we supposed to do then?



The first attempt at a solution was to use the concept of setting a set_output_multiple value for the block. This call tells the scheduler that the block can only handle chunks of data that contain a number of items that is a multiple of this value. So if we have floats in SSE chips, we need a multiple of 4 floats per call to the work function. It will never be called with less than 4 or some odd number that will ruin our alignment. 
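The multiple in question is just the SIMD alignment divided by the item size; a trivial sketch of that arithmetic:

```python
def alignment_multiple(alignment_bytes, item_size_bytes):
    """Items-per-call multiple needed to keep buffers aligned:
    e.g. 16-byte SSE registers and 4-byte floats -> multiples of 4 items."""
    assert alignment_bytes % item_size_bytes == 0
    return alignment_bytes // item_size_bytes

print(alignment_multiple(16, 4))   # floats on SSE          -> 4
print(alignment_multiple(16, 8))   # complex floats on SSE  -> 2
```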


But there's a problem with that approach. The scheduler doesn't really function well when given that restriction. Specifically, there are two issues. First, if the data stream being processed is finite and that number is not a multiple of what's required by the alignment, then the last number of items won't ever be processed. That's not the biggest deal in the world as GNU Radio is typically meant to stream data, but it could be a problem for many applications of processing data from a file.


The second problem, though, is latency. When processing packetized data, we cannot produce a packet until we have enough samples to make the packet. But at some point, we have the last few samples sitting in the buffer waiting to be processed. Because of our output multiple restriction, we leave those sitting behind until more samples are available so that the scheduler can pass them on. That would mean a fairly large amount of added latency to handle a packet, and that's unacceptable.


No, what we need is a solution that keeps the data flowing as best as it can while still working towards keeping the buffers aligned.


Branch Location

This post discusses issues that will hopefully be merged into the main source code of GNU Radio soon. However, I would like it to undergo significant testing, first, and so have only published a branch at:

http://github.com/trondeau/gnuradio.git

as the branch safe_align.

Scheduler Alignment

Instead of using the set_output_multiple approach, we need a solution that meets the following goals:

  • Minimize effect to latency; maximize throughput.
  • Try to maintain alignment of buffers whenever possible.
  • When not possible to keep alignment, pass on data quickly.
    • minimize latency accrued by holding data.
  • Re-establish alignment but not at the expense of extra calls.
    • pass on the largest buffer possible that re-establishes alignment.
    • don't pass the minimum required. The extra overhead of calling a purposefully-truncated work function is greater than the benefit of realigning quickly.

In its implementation, we want to minimize any added computation in the scheduler that would slow down our code.


In the approach that we came up with, the scheduler looks at the number of items it has available for the block. If there are enough items to keep the buffers aligned, it passes on the largest number of samples possible that maintains the alignment. If there aren't enough, then it sends them along anyway, but it sets a flag that tells the block of the alignment problem.


When the buffers are misaligned, the scheduler must try to correct the alignment. There are two ways of doing this. The easiest way is just to pass on the minimum number of items possible that re-establishes alignment. The problem with this approach is that the number is really small, so you are asking the work function to handle 1, 2, or 3 items, say. Then it has to go back to the scheduler and ask for more. This kind of behavior incurs a tremendous amount of overhead in that it deals more with moving the data than processing it.


The second way of handling the misalignment is to take the amount of data currently available and pass on the largest possible chunk of items that will re-establish the alignment. This forces us to handle another call to work with unaligned data, but the penalty for doing that is much less than the overhead of purposefully handling small buffers. In my experiments and analysis, most of the data comes across aligned anyway, so these calls are minimal.
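A toy version of that policy might look like this; this is my own sketch of the logic described above, not the scheduler's actual code:

```python
def items_to_pass(offset_items, avail_items, item_size, alignment=16):
    """Pick how many items to hand to the work function.  Aligned case:
    largest multiple of the alignment that fits.  Misaligned case: largest
    chunk whose *end* lands back on an alignment boundary.  If there are
    too few items to realign, pass them all anyway (don't hold data)."""
    per = alignment // item_size          # items per alignment boundary
    if offset_items % per == 0:
        n = (avail_items // per) * per    # stay aligned if we can
        return n if n > 0 else avail_items
    n = avail_items - ((offset_items + avail_items) % per)
    return n if n > 0 else avail_items
```

For example, with 4-byte floats and 16-byte alignment, a buffer offset by 1 item with 10 available gets a 7-item chunk, so the next call starts on a boundary again.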


To accomplish these new rules, the GNU Radio gr_block class (which is a parent class to all blocks) has these new functions:

  void set_alignment (int multiple);
  int  alignment () const { return d_output_multiple; }

  void set_unaligned (int na);
  int  unaligned () const { return d_unaligned; }

  void set_is_unaligned (bool u);
  bool is_unaligned () const { return d_is_unaligned; }

A block's work function can check its alignment and make the proper decision based on that information. The block can call is_unaligned(); if it indicates that the buffers are aligned, then the aligned Volk kernel can be called. Otherwise, it can either process the data directly or call an unaligned kernel.


In order not to make this blog post longer than it already is, I will post a separate blog post discussing the method and results of benchmarking all of this work. In it, just to tease, I'll show a few surprising results. First, I'll show that the use of Volk can give us dramatic improvements for a lot of simple blocks (ok, that's not surprising). Second, on the tested processors, I see almost no penalty for making unaligned loads and stores. And third, lest you think that last claim makes all of this work unnecessary, my tests show that the efforts to keep the alignment going in the new scheduler actually improve the processing speed even without using Volk. So there is a two-fold benefit to this work: one from the scheduler itself and then a second effect from Volk.

Making Unaligned Kernels

Because we will be processing unaligned buffers in this approach, we need to either handle these cases with generic implementations or use an unaligned kernel. The generic version of the code is what already exists in the blocks we would like to transition to Volk: the standard C/C++ for-loop math.


A useful approach, though, is to make use of unaligned Volk kernels. Even though an unaligned load is a bit more costly than an aligned one, we try to maximize the size of the buffers to process, and the overall effect is still faster than a generic for loop. So it behooves us to call the unaligned version in these cases, which might mean making a new kernel specifically for this.


Luckily, in most cases, the only difference between an aligned Volk kernel and an unaligned one is the use of loadu instead of load and storeu instead of store. These two simple differences make it really easy to create an unaligned kernel.


With this approach, a GNU Radio block can look really simple. Let's use the gr_multiply_cc block as an example. Here's the old version of the call:

int
gr_multiply_cc::work (int noutput_items,
  gr_vector_const_void_star &input_items,
  gr_vector_void_star &output_items)
{
  gr_complex *optr = (gr_complex *) output_items[0];
  int ninputs = input_items.size ();
  for (size_t i = 0; i < noutput_items*d_vlen; i++){
    gr_complex acc = ((gr_complex *) input_items[0])[i];
    for (int j = 1; j < ninputs; j++)
      acc *= ((gr_complex *) input_items[j])[i];
    *optr++ = (gr_complex) acc;
  }
  return noutput_items;
}


That version uses a for-loop over both the number of inputs and the number of items. Here's what it looks like when we call Volk.

int
gr_multiply_cc::work (int noutput_items,
     gr_vector_const_void_star &input_items,
     gr_vector_void_star &output_items)
{
  gr_complex *out = (gr_complex *) output_items[0];
  int noi = d_vlen*noutput_items;
  memcpy(out, input_items[0], noi*sizeof(gr_complex));
  if(is_unaligned()) {
    for(size_t i = 1; i < input_items.size(); i++)
      volk_32fc_x2_multiply_32fc_u(out, out, (gr_complex*)input_items[i], noi);
  }
  else {
    for(size_t i = 1; i < input_items.size(); i++)
      volk_32fc_x2_multiply_32fc_a(out, out, (gr_complex*)input_items[i], noi);
  }

  return noutput_items;
}


Here, we only loop over each input, but the calls themselves are to the Volk multiply complex kernel. We test the unaligned flag first. If the buffers are flagged as unaligned, we use the volk_32fc_x2_multiply_32fc_u kernel where the final "u" indicates that this is an unaligned kernel. So for each input stream, we process the data this way. In particular, this kernel only takes in two streams at once to multiply together, so we take the output and multiply it by the next input stream after having first pre-loaded the output buffer with the first input stream.


Now, if the block's buffers are aligned, the flag will indicate as much and the aligned version of the kernel is called. Notice that the only difference between the kernels is the "a" at the end instead of the "u" to indicate that this is an aligned kernel.


If we didn't have an unaligned kernel available, we could either create one or just call the old version of the gr_multiply_cc's work function in this case.
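In NumPy terms, the computation the work function above performs is simply a running elementwise product across the input streams; an illustration of the math, not GNU Radio code:

```python
import numpy as np

def multiply_cc(input_streams):
    """Elementwise product across input streams -- the same computation
    gr_multiply_cc's work does with one Volk kernel call per extra input."""
    out = np.array(input_streams[0], dtype=np.complex64)  # pre-load with stream 0
    for stream in input_streams[1:]:
        out *= np.asarray(stream, dtype=np.complex64)     # one 'kernel call' each
    return out

a = np.array([1 + 1j, 2 + 0j], dtype=np.complex64)
b = np.array([0 + 1j, 3 + 0j], dtype=np.complex64)
print(multiply_cc([a, b]))  # elementwise product of a and b
```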

Blocks Converted so Far

These next few sections are starting to get really low-level and specific, so feel free to stop reading unless you are really interested in the development work. I include this as much for the historical reference as anything.


Most of the blocks that I have so far moved over to using Volk fall into the category of "low-hanging fruit." That means that, mostly, the Volk kernels existed or were easy to create from existing Volk kernels (such as making unaligned versions of them), that the block only needed a single Volk kernel to perform the activity required, and that they had very straightforward input-to-output relationships.


On occasion, I added a few things that I thought were useful. The char->short and short->char type conversions did not exist as blocks, but the Volk kernels were already there, so making them GNU Radio blocks was easy and, hopefully, useful.


I also added a gr_multiply_conjugate_cc block. This one made a lot of sense to me. First, it was really easy to add the two lines it took to convert the Volk kernel that did a complex multiply into the conjugate and multiply kernel that's there now. Since this is such an often-used function in DSP, it just seemed to make sense to have a block that did it. My benchmarking shows a notable improvement in speed by combining this operation into a single block, too. Just to note, this block takes in two (and only two) inputs where the second stream is the one that gets conjugated.


What follows is a list of blocks of different types converted to using Volk.


Type conversion blocks

  • gnuradio-core/src/lib/general/gr_char_to_float
  • gnuradio-core/src/lib/general/gr_char_to_short
  • gnuradio-core/src/lib/general/gr_complex_to_xxx
  • gnuradio-core/src/lib/general/gr_float_to_char
  • gnuradio-core/src/lib/general/gr_float_to_int
  • gnuradio-core/src/lib/general/gr_float_to_short
  • gnuradio-core/src/lib/general/gr_int_to_float
  • gnuradio-core/src/lib/general/gr_short_to_char
  • gnuradio-core/src/lib/general/gr_short_to_float


Filtering blocks

  • gnuradio-core/src/lib/filter/gr_fft_filter_ccc
  • gnuradio-core/src/lib/filter/gr_fft_filter_fff
  • gnuradio-core/src/lib/filter/gri_fft_filter_ccc_generic
  • gnuradio-core/src/lib/filter/gri_fft_filter_fff_generic


General math blocks

  • gnuradio-core/src/lib/general/gr_add_ff
  • gnuradio-core/src/lib/general/gr_conjugate_cc
  • gnuradio-core/src/lib/general/gr_multiply_cc
  • gnuradio-core/src/lib/general/gr_multiply_conjugate_cc
  • gnuradio-core/src/lib/general/gr_multiply_const_cc
  • gnuradio-core/src/lib/general/gr_multiply_const_ff
  • gnuradio-core/src/lib/general/gr_multiply_ff

Gengen to General

One thing that might confuse people who have previously developed in the guts of GNU Radio is how I moved some of the blocks from gengen to general. Many GNU Radio blocks perform some function, like basic mathematical operations on two or more streams, that behave identically from a code standpoint but which use different data types. These have been put into the gengen directory as templated files where a Python script is used to autogenerate the type-specific class. This was before Swig would properly handle actual C++ templates, so we were left doing it this way.


Well, with Volk, we don't really have the option to template classes, since the Volk call is highly specific to the data type used. So when moving certain math blocks for a specific type out of gengen, we went with the simple solution of removing that data type from the autogeneration scripts and placing it into general as a stand-alone block that can call the right Volk function. Good examples are the gr_multiply_cc and gr_multiply_ff blocks.


This really seems like the simplest possible answer to the problem. It maintains our block structure that we've been using for almost a decade now and keeps things clean and simple for both developers and users. The downside is some duplication of code, but with the Volk C functions, that is somewhat inevitable and not a huge issue to deal with.

"GNU Radio is Crap" and Other Such Insights

There seem to be two kinds of people that I meet when talking about GNU Radio: those who love it and those who hate it. I rarely meet anyone either in between or who just wants to know more about it. I am writing this to explore why this happens and why some people think GNU Radio is, as is often put, crap.

First, let me take a step back. I was recently at the Software Radio Implementation Forum (SRIF) in Hong Kong to give a talk on GNU Radio. There were some enthusiastic people in the audience who I talked to afterwards about the project. Also there was a talk on Microsoft's Sora project, another software radio system. The presentation was very enlightening, especially in regards to identifying the differences between GNU Radio and Sora. In large part, we have come at things from two completely different philosophies, which I think plays into the main thread of this article.

GNU Radio has always been about enabling the development and experimentation of communications. It was designed around the flow graph and the block structure to piece radios together. We wanted developers and designers. As such, we spent little time working on grand applications and demonstrations of the project, save maybe for the original ATSC decoder. We have lots of examples, but most of them are about how to use GNU Radio, not about showing what GNU Radio can do. There's a subtle but important difference between the two.

Sora, on the other hand, came at things with the intent of building and showing off complex applications in software radio, ostensibly to prove that GPPs could handle complex waveforms. Their initial project enabled them to do 802.11 in real-time. They then moved to LTE. Their use of memory and look-up tables to replace computations and their initial focus on using SIMD programming for speed have made a very fast, solidly performing SDR system. But it is really only now that they are building Sora into a project for developers. At this SRIF talk, I heard about them building "bricks," which we would call "blocks," that will fit together and create a radio. 

Now, this is, obviously, my take on the history of the project from the presentations and papers that I have seen and read. 

So the way that I see it is that the projects came from different directions. We started with the development and programming model, but lack good applications to showcase our product. They have great apps, but are just now getting things together as a developers platform. This represents two different mindsets that I see in the computer and programming community as a whole. There are those who want to develop and those who want to use. In a very basic and non-nuanced sense, these attitudes are the distinction between Windows users and Linux users. Linux users don't mind getting their hands dirty and working a bit harder for the freedoms and power Linux gives them. Windows users are more interested in using computers. (I confess that such a simplified comparison makes me feel like a bad stand-up comic, but I hope the point is made without belaboring the subject.)

From this perspective, those people who think GNU Radio is crap, when I get a chance to talk to them about their problems, tend to be from the application side of things. They try to use one of our examples out of the box and treat it as though it is meant to be a full-on application. Except that they are examples of how to do things. We have not yet built a real digital communications application as part of GNU Radio. Our benchmark scripts are meant for benchmarking and exploring how to do digital communications. They were never meant to be used as part of a deployed communications platform. If you look at it, we don't even use equalizers or channel coding, so they are fragile and hard to use. But we've never claimed any differently.

Still, our examples are there for the world to see, and I can't be surprised when people mistake the intentions. And so I similarly cannot be upset when I get the reactions from people who wanted to use GNU Radio to make a quick application based on OFDM or MIMO. We simply haven't provided them with the means to do so.

On the other hand, when someone comes to the project with a developers mindset and wants to do something complicated, they can spend the time to work with the code and see what kind of capabilities are offered. We have some great developers who have done some amazing things with GNU Radio, and so we know that it's not crap. But it's not shiny, either.

To conclude, I'm left with the difficult problem of trying to think if there is a way to solve this problem. There is definitely no single solution or magic bullet. I'd love to see even more complete applications get published on CGRAN or sites like Github. And for some of those that are already published, some amount of care needs to be taken to ensure some quality control, by which I mean good documentation and a user interface to the products as well as easy maintenance as GNU Radio versions evolve. I'd also love more feedback and additions to our examples in GNU Radio, even to the extent that we can get more of what we would call applications (there's a reason we have an examples and an apps directory in our new tree structure). These ideas involve the buy-in of the community and a slight change in our ideas and understanding of how to construct and present applications to the world.

SNR Estimators

In GNU Radio, we have been slowly evolving our digital communications capabilities. One thing that becomes quickly and painfully obvious to anyone doing real over-the-air communications with digital modulations is that the modulators and demodulators are the easy part. It's the synchronization that's the hard part and where most of your work as a designer goes.

Generally speaking, synchronization means three (maybe four) things: frequency, timing, and phase. The fourth is automatic gain control (AGC). While AGC isn't really "synchronization," it follows the same principles of an adaptive loop. Different modulation schemes have different methods for AGC and synchronization. These tend to fall into the categories of narrowband, wideband (and then ultrawideband), and OFDM, but we could easily go deeper into the differences within each category, too. For instance, narrowband PSK and FSK systems place quite different demands on a receiver for demodulation, and the appropriate synchronization algorithms reflect this.

But this post is about SNR estimation. The reason to talk about synchronization here is twofold. First, like synchronizers, SNR estimation techniques can vary widely depending on the modulation scheme being used. Second, most synchronization schemes like to be reactive to changes in SNR. You can often find different algorithms that work well in low SNR cases but not high, or that are too costly to run at high SNR, where the signal quality allows you to use something simpler and/or more accurate. Take, for example, an equalizer. The constant modulus equalizer is great for PSK signals, but we know that an LMS decision-directed equalizer works better. A decision-directed equalizer, however, only works in cases where the SNR is large enough that the majority of decisions are correct. So we often start with a blind CMA for acquisition purposes and then move to a decision-directed approach once we've properly locked on to the signal.
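To make the switch concrete, here is a toy sketch of the two error terms for a unit-energy QPSK constellation. The function names and sign conventions are my own, not GNU Radio's equalizer code:

```python
def cma_error(y, modulus=1.0):
    # Constant-modulus (blind) error: push |y| toward the target modulus.
    # It needs no knowledge of the transmitted symbol, so it's usable
    # for acquisition before the signal is locked.
    return y * (modulus - abs(y) ** 2)

def dd_error(y):
    # Decision-directed error for QPSK: slice y to the nearest constellation
    # point and use the difference. Only reliable once the SNR is high
    # enough that most decisions are correct.
    s = 2 ** -0.5
    decision = complex(s if y.real > 0 else -s, s if y.imag > 0 else -s)
    return decision - y

# A received sample near the (0.707 + 0.707j) symbol:
y = 0.9 + 0.5j
print(cma_error(y))  # small push back toward the unit circle
print(dd_error(y))   # small push toward the sliced symbol
```

In a receiver you would drive the filter taps with cma_error until some lock metric (an SNR estimate, say) indicates the eye is open, then hand the same filter over to dd_error.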

We've been meaning to add SNR estimators to GNU Radio for some time now, and I finally developed enough of an itch to start doing it. But as I said, each modulation could use a different estimator, and there are various estimators designed for this modulation in that channel or that modulation in such and such channel. If you have access to IEEE Xplore, a simple search for "snr estimator" produces 1,279 results. Now, I know that's nothing to a Google search, but twelve hundred scholarly (we hope) articles on a topic is a lot to get through, and you will quickly see that there are finely tuned estimators for different modulations and channels.

What I did was give us a start into this area. I took a handful of the most generic, computationally realistic estimators that I could find for PSK signals and implemented them. I'll give a shout-out here to Norman Beaulieu, who has written a lot in this field. I've found a lot of his work on SNR estimators to be accessible and useful, and he presents it in ways that can easily be translated into actual, working code.

I also took the tack of looking for estimators that work in AWGN channels. Now, if you've ever heard me speak on the subject of communications education or research, you've probably heard me scoff at anyone who develops a system under simulated AWGN conditions. In the case of an SNR estimator, though, I thought about this and had to come to the conclusion that the only way to handle it is to have an estimator into which you can plug variables for your channel model, which of course assumes that you have or can estimate these parameters. So in the end, I followed Beaulieu's lead in at least one of his papers and took algorithms that could be both simplified and tested by assuming AWGN conditions. I did, however, provide one of these algorithms (what I refer to as the M2M4 algorithm) in a form that allows a user to specify parameters to better fit non-AWGN channels and non-PSK signals. Using the AWGN-based algorithms alongside this other version of the M2M4 seemed like a good compromise: computable without more information, but at least a tip of my hat to the issue of non-AWGN channels. If nothing else, these estimators should give us a ballpark estimate.

I also specifically developed these estimators based on a parent class that will easily allow us to add more estimators as they are developed. Right now, the parent class is specifically for MPSK, but we can add other estimator parent classes for other modulations; maybe have them all inherit from a single class in the end -- this is fine, since the inheritance would really be hidden from the end user. The class itself is in the gr-digital component and is called digital_impl_mpsk_snr_est. Its constructor is simply:

digital_impl_mpsk_snr_est(double alpha);

Where the parameter alpha is the value used in a running average; every estimator I've seen is based on expected values of a time series, which we approximate with the running average. This value defaults to 0.001 and should be kept small.
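For readers who want to see what that means in practice, here's a minimal sketch of such a single-pole running average (a toy class of my own, not the actual estimator internals):

```python
class RunningAvg:
    # Single-pole IIR "running average": the stand-in for the expected
    # values E[.] that appear in the estimator equations.
    def __init__(self, alpha=0.001):
        self.alpha = alpha  # small alpha = long averaging window
        self.avg = 0.0

    def update(self, x):
        # Move a fraction alpha of the way toward the new observation.
        self.avg += self.alpha * (x - self.avg)
        return self.avg

avg = RunningAvg(alpha=0.1)  # large alpha here just to converge quickly
for _ in range(200):
    avg.update(1.0)          # feed a constant input
print(round(avg.avg, 3))     # converges to 1.0
```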

I have created four estimators that inherit from this block. These are named the "simple," "skew," "M2M4," and "SVR" estimators. The last two come from [1]. The "skew" estimator uses a skewness measure that was developed in conversation with fred harris. The simple estimator is probably written up and documented somewhere, but it's the typical measurement based on the mean and variance of the constellation cloud. I've tried to document these as best as possible in the header files themselves, so I'll refer you to the Doxygen documentation for details (note that as of the writing of this blog post, these estimators are only in the Git repository but will be available starting in the 3.5.1 release). The "SNR estimators" group in the Doxygen manual can be used to find all of the available estimators and details about how to use them.

In particular, the M2M4 and SVR methods were developed around fading channels, and both use the kurtosis of the modulation signal (k_a) and kurtosis of the channel (k_w) in their calculations. This is great if these values are known or can be estimated. In the case of the M2M4 algorithm, I provide a version of it, called digital_impl_snr_est_m2m4, as an example of a non-PSK and non-AWGN method; right now, this class is not exposed through any actual GNU Radio block. It's untested and unverified, but I wanted it there for reference and hopefully to use later.
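As a rough illustration of the moments-based approach, here is a sketch of the M2M4 estimator specialized to a constant-modulus (PSK) signal in complex AWGN, i.e., signal kurtosis k_a = 1 and noise kurtosis k_w = 2. It follows the general recipe surveyed in [1], but the code is my own simplification, not the GNU Radio implementation:

```python
import cmath
import math
import random

def m2m4_snr_db(samples):
    n = len(samples)
    m2 = sum(abs(y) ** 2 for y in samples) / n   # E[|y|^2] = S + N
    m4 = sum(abs(y) ** 4 for y in samples) / n   # E[|y|^4]
    # With k_a = 1 and k_w = 2: m4 = S^2 + 4*S*N + 2*N^2,
    # so the signal power falls out as S = sqrt(2*m2^2 - m4).
    sig = math.sqrt(max(2 * m2 ** 2 - m4, 0.0))
    noise = max(m2 - sig, 1e-12)
    return 10 * math.log10(sig / noise)

# Quick check: QPSK at a known 10 dB SNR.
random.seed(1)
true_snr_db = 10.0
n0 = 10 ** (-true_snr_db / 10)  # noise power for unit signal power
rx = [cmath.exp(1j * (math.pi / 4 + math.pi / 2 * random.randrange(4)))
      + complex(random.gauss(0, math.sqrt(n0 / 2)),
                random.gauss(0, math.sqrt(n0 / 2)))
      for _ in range(50000)]
print(round(m2m4_snr_db(rx), 1))  # close to 10.0
```

The parameterized digital_impl_snr_est_m2m4 version mentioned above is, as I understand it, the one that lets you change those two kurtosis constants for other signals and channels.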

 

SNR Use in Demodulation

The main intent of having an SNR estimator block is to enable the use of SNR information by other blocks. As such, there are two GNU Radio blocks defined for doing this in different ways. First off, let's say that the SNR estimation is done after timing recovery (see harris' paper "Let's Assume the System is Synchronized," which is unfortunately fairly costly to obtain but worth it if you can get a copy). In GNU Radio terms, this means that the SNR is estimated downstream of most of the synchronization blocks. While I would like to just pass a tag along with the SNR information, that won't work for every block, like the frequency recovery, AGC, and timing recovery loops that come before. Instead, we will have to pass them a message with this information. However, some blocks downstream want this info, too, like the channel equalizer.

To accommodate both possible uses, I created two blocks. The digital_mpsk_snr_est_cc block has a single input and a single output port, so it's meant to go inline in a flowgraph. This block produces tags with the SNR every N samples, where N is set by the user (the second argument in the constructor, or through the set_tag_nsamples(int N) function). Downstream blocks can then look for the tag with the key "snr" and pull out this information when they need it. The value of N defaults to 10,000 as an arbitrary, fairly large number. You'll want to set this depending on how quickly you expect the SNR conditions to change.

The second block is a probe, which is GR-speak for a sink. It's called digital_probe_mpsk_snr_est_c and takes a single complex input stream. Right now, it just acts as a sink where the application running the flowgraph can query the SNR by calling the snr() function on this block (the same is true of the digital_mpsk_snr_est_cc block, too). This block has a similar constructor in that you set a value N for the number of samples between updates. In this case, instead of sending a tag downstream, it will send a message every N samples. The problem with this is that our message passing system isn't really advanced or easy enough to use to set this up properly. Recent work by Josh Blum might fix this, though.
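As a plain-Python sketch of the bookkeeping the stream block does (the tag_snr helper and dict-based tags are my own stand-ins, not the GNU Radio tag API), the idea is simply to pass samples through untouched and attach an "snr"-keyed tag every N samples:

```python
import math

def tag_snr(samples, snr_fn, n=10000):
    # Pass the stream through unchanged; record an "snr" tag at the start
    # of every n-sample chunk, the way the stream block attaches tags.
    tags = []
    for offset in range(0, len(samples), n):
        chunk = samples[offset:offset + n]
        tags.append({"offset": offset, "key": "snr", "value": snr_fn(chunk)})
    return samples, tags

def power_db(xs):
    # Hypothetical estimator stand-in: average power in dB, not a real
    # SNR estimate.
    return 10 * math.log10(sum(abs(x) ** 2 for x in xs) / len(xs))

out, tags = tag_snr([1 + 0j] * 25000, power_db, n=10000)
print(len(tags), tags[0]["key"])  # three tags, each keyed "snr"
```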

Eventually, though, we hope to be able to create flow graphs where the SNR estimation is passed around to other blocks to allow them to adjust their behavior. In the case I'm interested in right now, I'd like to pass this info to the frequency lock loop to stop it if the SNR falls below a certain level so that it doesn't walk away when there is no signal to acquire.

 

[1]  D. R. Pauluzzi and N. C. Beaulieu, "A comparison of SNR estimation techniques for the AWGN channel," IEEE Trans. Communications, Vol. 48, No. 10, pp. 1681-1691, 2000.

 

More on Using Git

There are still lots of questions on properly using git for doing development work with GNU Radio. Jason Abele from Ettus Research sent me this link, which is a good, general guide to using Git with gEDA. gEDA is also an open source project, so some of the lessons they teach here are relevant to our community, too. I've added a link to this on our GNU Radio project site's page on using Git (http://gnuradio.org/redmine/projects/gnuradio/wiki/DevelopingWithGit).

Control Loop Gain Values

I have recently been working on updating our digital modulation capabilities in GNU Radio. This has involved making a gr-digital namespace, moving all relevant digital modulation work over, and going through and fixing up a number of the algorithms. One issue in particular that I have been working on is the concept of the loop gains in all of our control loops. You will find in any control loop we have, like in the clock recovery, Costas loop, constellation receiver, FLL and PLL blocks, that the control loop has two gains, alpha and beta. These are used in the following ways:

freq = freq + beta * error

phase = phase + freq + alpha * error

When creating any of these blocks, we have to specify both alpha and beta. But what should these values be? What relationship do they have to each other, or, indeed, the physical properties of the control loop? Well, that's not easy to understand or explain, and because of this, it's not the right way to build these algorithms.

A better way is to convert these gains into a damping factor and loop bandwidth, which gives us a bit more intuition as to what's going on and makes them easier to set. So I have been switching over to using these concepts instead of alpha and beta. The loop equations are the same, and so we have to derive alpha and beta from these two new concepts. This is done in the following way where damp is the damping factor and bw is the loop bandwidth:

alpha = (4 * damp * bw) / (1 + 2 * damp * bw + bw * bw)

beta = (4 * bw * bw) / (1 + 2 * damp * bw + bw * bw)

So now we just have to know what damping factor and bandwidth we require from our loops. Of course, right now it seems like we've just translated from one set of unknowns to another set of unknowns. But at least we can better explain this new set. In fact, we're going to break it down so that we only specify one of those numbers.

In control systems, the damping factor specifies the oscillation of the loop: whether it's under-, over-, or critically damped. In a normalized system, a damping factor of 0.707 (that is, sqrt(2)/2) gives the classic maximally flat response, which is usually treated as the critically damped setting for loop design. This is a good approximation for most systems, so we can just set it and forget about it.

So now, we just have to set a loop bandwidth. This is a bit more difficult, but as a rule, this value should be somewhere around (2pi/100) to (2pi/200), so we can work around there. If we're trying to optimize the behavior of our system, it's now a hell of a lot easier to work with a single value instead of two.
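Putting the pieces together, here is a toy second-order loop that maps the damping factor and loop bandwidth to alpha and beta using the equations above, and then tracks a clean tone's frequency offset. This is a sketch of the idea, not a GNU Radio block:

```python
import cmath
import math

def loop_gains(damp, bw):
    # The alpha/beta mapping from the equations above.
    denom = 1 + 2 * damp * bw + bw * bw
    return (4 * damp * bw) / denom, (4 * bw * bw) / denom

alpha, beta = loop_gains(damp=0.707, bw=2 * math.pi / 100)

true_freq = 0.03        # radians/sample offset to acquire
phase = freq = 0.0      # loop state
for n in range(2000):
    x = cmath.exp(1j * true_freq * n)                # clean input tone
    error = cmath.phase(x * cmath.exp(-1j * phase))  # phase detector
    freq += beta * error                             # integral branch
    phase += freq + alpha * error                    # proportional branch
print(round(freq, 3))   # converges to the true 0.03 rad/sample offset
```

Note that the only number we actually chose was the loop bandwidth; the damping factor stayed at its set-and-forget 0.707.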

Just to be complete about the whole thing, though: as I have been reworking all of these blocks, I have also been adding set and get functions for every value. So while the constructor only needs us to give it the loop bandwidth, we can also change the damping factor if there's a particular need to do so. We can also still change alpha and beta individually, if we really want to.

More on DSP Tips and Tricks

After reading a few of the IEEE Signal Processing Magazine's series on "DSP Tips and Tricks," it struck me that these are very interesting, easily digestible, and useful tools that we could use in GNU Radio. If you have access to IEEE Xplore or the Signal Processing Magazine, search for "DSP Tips and Tricks" to get a list of all of the articles. They are each about three pages long and geared towards practical outcomes. Many of them focus on FPGA and fixed-point DSP projects in particular, but each teaches some lessons that GNU Radio could potentially benefit from.

I would love to see a series of junior- or senior-year ECE student projects to implement these (they probably aren't capstone-level projects, though). They'd be great hands-on projects with math and programming, and students would be required to demonstrate them working with real signals.

Discussing the Development Cycle

There has been some talk recently about the GNU Radio project and its development. I had made a request for people to help and contribute more, but one of our users, Martin Braun, made some excellent points. I wanted to record them here to make sure that I have them ready to be addressed. One thing that I hope to improve upon in the near future is our web page. We have a great resource in the gnuradio.org wiki that is underutilized, and questions like Martin's could be answered by crafting good pages that describe the development process and methods for contributing.

".... let me suggest some other things to smoothen the 'community integration':

 

  • There could be another document with the definitive guide on how to contribute. Perhaps updating the previous link would be enough. Questions answered should include:
    • What kind of stuff is accepted into the core, and what kind of stuff is better maintained as a separate CGRAN project? (Examples, refer to the mailing lists as a place to discuss this...)
    • The mechanics/protocol of actually submitting
    • What happens after submitting?
  • Revive the bug tracker.
  • Explain who's who in GNU Radio (seriously, who's actually actively developing GR besides Tom? Are there areas of responsibility? Who may submit to the master?)
  • Create a list of suggestions of contributions ("You want to contribute? How about you write a foo-agulator for standard bar? How about writing the docs for block `grep -R 'FIX MY DOCS' src/lib/`?")

Thanks, Martin!

Paper on Compressive Sensing

I've developed a bit of an interest in compressive sensing. It's a fun topic that has some potential for solving various problems. I coauthored a paper at DySPAN about using compressive sensing for white space detection (that is, finding spectrum "holes"). While not really related to GNU Radio, it certainly fits into the areas of communications that we should be thinking about in SDR.

The specific point of this post, though, is to recommend a paper by Davenport et al. from Rice University. This team, under Richard Baraniuk, has done some great work on the practical aspects of using compressive sensing, and this particular paper addresses the benefits and drawbacks.

The Pros and Cons of Compressive Sensing for Wideband Signal Acquisition: Noise Folding vs. Dynamic Range

It provides a good background in what compressive sensing is and goes on to analyze and discuss what you can expect from a compressive sensing receiver.

DSP Tips and Tricks

I wanted to point out that Richard Lyons (some of you might have studied introductory DSP with his book) is doing a series in the IEEE Signal Processing Magazine called "DSP Tips and Tricks." This is an area that I would hope anyone in software radio finds interesting. It is much more about practical problems and solutions in our signal processing world than about research and theory (which I personally find interesting, but I recognize others tend to have more practical interests than me).

The article from the March issue is an interesting look at an old problem, but one that many people might not be familiar with (based on discussions I've had, mainly with students). He looks at a neat trick for reducing the scalloping loss error when attempting to determine a signal's power from an FFT plot.

Reference: Signal Processing Magazine, IEEE, Vol. 28, No. 2. (2011), pp. 112-116.

More on DySPAN 2011

I had a great time at DySPAN in Aachen last week, and there was lots of talk about the state not only of DySPAN but of its related technologies. There was a lot of attention paid to the state of DSA (dynamic spectrum access) devices and their theoretical/research aspects in general, but we also talked a lot about the role and state of cognitive radios and software radios. A number of follow up posts will, I'm sure, be dealing with these questions.

Also, my good friend and former boss Linda Doyle has written a number of great discussion pieces on her blog. So if you want to get a feel for some of the subtext and conversations happening during the conference, head there.

Part of the conversation at DySPAN took place over Twitter. Keith Nolan and I tried this during DySPAN 2008 in Chicago, including a live feed on a monitor in the demo room, but at the time it was a bit premature for the attendees to really get into. This year, though, we had some great activity over Twitter, even though I was more silent than usual. I think the computer science and IT conference worlds have already caught on to using social networking like Twitter to enhance their conferences, but this was the first time I really felt like it was part of the overall tenor of this event.

The Friday morning panel sessions really got us going, probably due to the energy the panels and moderators brought to the day. We as the audience responded to them by actively participating in the backchannel on Twitter. It not only served as an incubation room for developing some ideas, but it will also be there as a record of what was happening and (at least a part of) the audience's reactions.

DySPAN 2011

DySPAN 2011 kicked off this morning in Aachen, Germany. So far, it's been enjoyable with three really good keynotes to open it up, and the rest of the program looks like it's going to be good.

This week's activities are sure to keep me very busy and away from any significant code work, unfortunately. But I'm excited to see many of the demonstrators once again using USRPs and GNU Radio.

SDR Technical Conference 2010

The Wireless Innovation Forum's SDR Technical Conference this year was a blast. I was able to catch up with a lot of old friends, meet new ones, and learn about a lot of new things. I was also involved in a number of sessions on GNU Radio, and each was incredibly well attended. The community interest in our work was pretty astounding.

The first session I was involved with was on Open Source Software in Military and Commercial Wireless. This was put together by Philip Balister and John Scott, and we had far more people interested than I had anticipated. We all cut our talks short a bit to make enough time to have a Q&A session with the audience, which went very well. There were many questions from the audience, and I felt like we had a good dialog about some big issues in OSS.

Even more astounding was the turnout on Wednesday night. After a long day of conferencing and the exhibits session with food and drink, we still had an almost packed room for our GNU Radio Meetup. We learned about some of the projects you guys are interested in, and went over a few of the new things we're introducing into GNU Radio as well as new stuff coming from Ettus Research. I had a lot of fun with this, and I also really enjoyed getting to know quite a few of you better at the pub later that night.

My final GNU Radio session occurred Thursday morning when I gave my half-day tutorial. Again, the attendance was much bigger than expected, and I hope everyone learned a few things from it.

I've just published the presentations from these events on gnuradio.org; you can find them here.

I'm looking forward to next year!

VOLK: Vector-Optimized Library of Kernels

We have just pushed a new library into the GNU Radio tree. It's called VOLK (Vector-Optimized Library of Kernels) and it is designed to help us work with the processor's SIMD instruction sets. These are very powerful vector operations that can give signal processing a huge boost in performance. We have done hand-optimization for the FIR filters in the past, and you can bet that FFTW uses SIMD heavily for its performance. Yet we never had a convenient way to really use SIMD in every-day signal processing in GNU Radio.

Volk helps us address this issue. It's a framework for adding SIMD functionality as we need it. It consists of a set of functions, say a vector multiplier for complex floats, that we want to use in places like the FFT filters to replace the inner multiply loop. But to use SIMD code, we need a way to stay processor independent while still integrating easily with our code. That's what volk gives us. Where we want to multiply two vectors, we can call:

volk_32fc_multiply_aligned16(c, a, b, N)

Where a and b are the two vectors we want to multiply, c is the output of the multiply, and N is the number of items to multiply. The key issue here is that a, b, and c must be 16-byte aligned. I'll go over that more in a later post.

Behind the scenes, volk knows that your processor can handle some set of SIMD instructions. If you run an Intel processor, it can do MMX, SSE, SSE2, SSE3, and maybe SSE4. Volk then has a list of routines that do complex multiplies. It will always have a generic routine, which is a standard C for loop that will run on any computer. But it will also have other multiply routines that are designed for different SIMD instruction sets. Without you knowing or caring about the SIMD architecture or how to write for it, volk selects the best version of the multiplier for your processor, like SSE3.
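The dispatch idea can be sketched in a few lines of Python (illustrative only: the kernel names, capability set, and preference table here are made up, and the real VOLK does this in C against the actual CPU features):

```python
def multiply_generic(c, a, b, n):
    # Guaranteed-portable fallback, like volk's generic C for loop.
    for i in range(n):
        c[i] = a[i] * b[i]

def multiply_fancy(c, a, b, n):
    # Stand-in for an SSE3-style kernel; here just a list comprehension.
    c[:n] = [x * y for x, y in zip(a[:n], b[:n])]

# Pretend capability probe; VOLK checks the real processor instead.
MACHINE_CAPS = {"generic", "fancy"}
# Kernels in preference order, best first.
KERNELS = [("fancy", multiply_fancy), ("generic", multiply_generic)]

def best_multiply():
    # Pick the first (most optimized) kernel this machine supports.
    return next(fn for name, fn in KERNELS if name in MACHINE_CAPS)

a = [1 + 2j, 3 + 0j]
b = [2 + 0j, 1 + 1j]
c = [0j, 0j]
best_multiply()(c, a, b, 2)
print(c)  # [(2+4j), (3+3j)]
```

The caller never names an instruction set; it just asks for "the best complex multiply," which is exactly the convenience the library provides.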

I did this on my system and achieved a 10% boost in speed in the FFT filters. Not bad considering that the multiply is not the biggest part of the FFT routine and the fact that it took me about a half-dozen lines of code to do, including the headers and other setup necessary.

That's the basic introduction to Volk for now. I'll post more about how to use it later. For now, I just wanted to alert everyone that it's available and will be built with GNU Radio's "next" branch. Also, volk is built as its own autotools project, which means it has its own configure and bootstrap. These are called automatically by GNU Radio's configure, so you don't have to do anything; you'll see volk's configure being run by itself during GNU Radio's configure. The upshot is that you can take the volk directory as a separate project from GNU Radio and configure, build, and produce libvolk tarballs that you can then use in your own projects.

Much of the early work on volk is public domain code, so if you don't see a GPL notice and copyright header in a file, it's public domain. The rest of it is GPLv3 and copyright FSF like the rest of GNU Radio. And any changes we make from the GNU Radio side of things will also be GPL'd. Just so you have some understanding of the state of things when using it in your own code.

 

New Interface for pfb_arb_resampler_ccf

It seems like there are more and more reasons to use the arbitrary resampler, and it has had a rather difficult interface. Until now, as the title of this post might suggest.

In the previous interface, you had to specify your own filter taps to the resampler block, which had this interface:

    resamp = blks2.pfb_arb_resampler_ccf(rate, taps, flt_size=32)

The rate and taps were required arguments, where rate sets the resampling rate of the filter and taps is a vector of filter taps. You could also specify the number of filters to use in the filterbank if you really wanted to. It defaults to 32, which gives excellent performance with very little quantization error. Increasing this to 64 or 128 provides very little benefit here (and powers of 2 are not necessary; I just tend to think this way).

The problem is, you had to know how to create the taps for your resampler, and often, you are not interested in actually filtering the signal. You just want to change its rate. Even still, you had to design a set of filter taps, and this is not necessarily intuitive if you don't know how the resampler works. In fact, to get a lowpass filter of bandwidth B, you would do something like:

    taps = gr.firdes.low_pass(32, 32*fs, B, W)

Where W is the transition band of the filter. So why is the gain 32 and the sample rate 32*fs? Simple: because that's the rate that the filter runs at, of course. What, you didn't know that? That's what I mean about needing to know how the filter works. And actually, that 32 should really be flt_size if you are using something besides the default.

See, the filter really runs at an interpolating rate of the filter size (e.g., 32), so you have to design the filter prototype thinking this way. If you've never worked with these kinds of filters before at all, even this might not have made much sense to you.
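To see why the prototype is designed at flt_size times the input rate, here is an illustrative windowed-sinc design and its polyphase decomposition. This is a sketch of the concept only, not gr.firdes.low_pass:

```python
import math

def prototype_lowpass(flt_size, fs, bw, ntaps=128):
    # Design at the internal rate flt_size * fs: that is why both the
    # gain and the design sample rate scale by flt_size in the firdes
    # call shown above.
    rate = flt_size * fs
    fc = bw / rate                  # normalized cutoff at the design rate
    mid = (ntaps - 1) / 2
    taps = []
    for n in range(ntaps):
        t = n - mid
        sinc = 1.0 if t == 0 else math.sin(2 * math.pi * fc * t) / (2 * math.pi * fc * t)
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / (ntaps - 1))  # Hamming
        taps.append(2 * fc * sinc * window)
    gain = sum(taps)
    taps = [flt_size * h / gain for h in taps]  # DC gain = flt_size
    # Polyphase view: every flt_size-th tap forms one branch, and each
    # branch runs at the original input rate fs.
    branches = [taps[i::flt_size] for i in range(flt_size)]
    return taps, branches

taps, branches = prototype_lowpass(flt_size=32, fs=1.0, bw=0.4)
print(len(branches), round(sum(taps), 6))  # 32 branches, DC gain 32.0
```

The new interface essentially does this design step for you when taps is left as None, which is why you can now ignore that it's a filter at all.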

So the new way you can use the resampler is to basically ignore that it's a filter at all. You can just give it a resampling rate and let it go at that. Inside, the block will design a filter that covers the bandwidth of the input signal, and that's all. You can still specify the number of filters to use with flt_size, and the out-of-band attenuation with atten. By default, flt_size=32 and atten=80. There's probably very few times you'll want to change these values. So the new interface is:

    resamp = blks2.pfb_arb_resampler_ccf(rate, taps=None, flt_size=32, atten=80)

You can play with the new interface by looking at gnuradio-examples/python/pfb/resampler.py in the git:master branch. This change should make it into the next version release (3.3.1) of GNU Radio.

As a note, the filter design is done in the Python world, which is why you have to use the blks2 wrapper version of the resampler and not gr.pfb_arb_resampler_ccf. Eventually, the filter design might make its way into the C++ code, but it's just so much easier to do this stuff in Python that I just put it there.

SDR Conference

At the upcoming WinnForum's SDR Conference, we will be holding a GNU Radio meetup as well as a few other GNU Radio related activities. Here's the rundown of the events. The conference goes from Nov. 30 to Dec. 3.

First and foremost is the meetup. It's sponsored by Ettus Research, LLC and will be hosted by Matt Ettus and myself. We will be starting with some light hors d'oeuvres at 7:30 PM, but the real gathering won't kick off until about 8 PM. The conference is being held in the Hyatt Regency Crystal City, but the exact location is TBD. As part of the conference, information on the location will be available there, and I'll update when I know more. I expect we'll take about an hour to talk and then move on to more exciting talk over beer at one of the local pubs, The Fox and Hound (labeled as Bailey's Pub on Google Maps).

We want to take this opportunity to get to know more about the people working with GNU Radio and about the projects they are working on. So while Matt and I will be going over some of our own stuff and our perspective on the project, this should be an open forum for anyone to discuss their work and ideas.

There are two sessions that I will be speaking at. The first is on Tuesday afternoon at 3:15 PM on "Open Source in Military and Commercial Wireless Workshop." I'll be speaking on the panel and giving a brief presentation about some of the future projects I'm working on with GNU Radio.

The second session is a tutorial I will be giving on GNU Radio that takes place at 9:50 AM on Thursday. It's not a purely introductory tutorial as we've done in the past. Instead, I will be giving more insight into the programming and design nature of working with GNU Radio. It's designed to help people think more about GNU Radio as an SDR system and the difficulties and challenges associated with it.

I hope to see many of you there!

Introducing Hudson!

While we've all been working to make GNU Radio better, it seemed about time to have some kind of central management tool to bring it all together. That's why I want to introduce the use of Hudson as our continuous integration server:

http://gnuradio.org/hudson

Hudson provides a nice webpage interface to project builds, status, and reports. It's very configurable, with a host of plugins for different purposes, so we can perform different tests and provide increased feedback and reports about the status of GNU Radio. My intention with this tool is to improve the quality of the code from a few different perspectives. I'll go over some of the main features here.

Testing Multiple Builds

We can set up Hudson to test multiple builds and then build them periodically. Currently, three different builds are defined for the maint, master, and next branches of the main GNU Radio repository. While Hudson allows us to test multiple Git branches in one build, I prefer the separate reporting that comes out of doing them separately.

The builds are currently offloaded to a remote machine (another nice feature of Hudson) that builds all three branches on an Ubuntu 10.04 system. This is the first real benefit a CI system like Hudson can bring to the project. While it only tests the builds on a single operating system, we can extend the process to test on different platforms by adding more builds. The only caveat is that we need machines consistently available to perform each of these builds. I'm working on that.

We will eventually want to support periodic builds of GNU Radio on a number of system types. For an early list of possible installs, I'm thinking the following are the most necessary:

 

  • Linux 64-bit (provided under Ubuntu)
  • Linux 32-bit (Ubuntu again, probably)
  • Fedora (various versions)
  • Apple OSX
  • Windows (cygwin or mingw or both)

 

I think that would cover a large number of issues and support the largest user base. 

See, every time a build is performed, I get an email saying that it either a) passed or b) failed with some error(s). I can then see what the errors are and work to fix them. This will help keep an eye on the various platforms and what our code changes do to them.

Unit Test Results

We have a lot of unit tests defined for GNU Radio, but most people probably don't pay much attention to them. They are designed to ensure a level of quality in the code. Indeed, they are called "QA" code for precisely that reason. They should test the basic capabilities of each block in as many ways as possible to catch any corner cases or other problems that arise in the code. Any new block or capability that is added to GNU Radio should come with some QA code. I admit that I've been very bad about this in the past myself. But I'm hoping that Hudson will help change this attitude by clearly exposing the results of the QA code.

If you click on any of the builds, you should see a graph titled "Test Result Trend." This shows the number of tests completed and failed during the build (or more exactly, the "make check") process. If you look at master, you'll see that there are 437 tests run and no failures at the time of this post. That's good, but we need to do better. Using this visual feedback, we can not only see how many tests are run, but also click on the graph to find out more information on the tests.

Seriously, click on the graph. You'll see a table of "All Tests" broken down into "(root)" and "__main__". Not the most helpful of titles, but what you are really seeing is that the "(root)" results come from the CppUnit tests run on the C++ code and the "__main__" results come from the Python unit tests. Clicking on the latter will bring up a larger table that shows which tests were run and the time it took to run each one.

Using this interface, we can find out what tests are being run and how they are performing. With this kind of visibility into GNU Radio's QA process, I hope it serves as a good reminder and incentive to keep improving the existing tests and writing new ones.

Parsing the Output

Another good way to keep track of the quality of the code is to understand the errors and warnings being produced during the build process. Now, it's not likely that errors in the code are going to last long enough to get to the Git repo, but you never know. So I have Hudson using a plugin that can parse the output of "make" and search for specific strings. Specifically, it looks for any "error" and "warning" text that's generated. It then provides a very convenient interface for looking at the results. Click on the "master" project and click on the "Parsed Console Output" link on the left.

It's showing me that, dear god, we have 112 warnings generated during the build of the master branch. Most of these are comparisons between signed and unsigned integers. Not a huge deal, but the kind of thing that's often overlooked during development. Now here they are, plain as day and ripe for fixing. Nothing critical, but useful information.
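For a sense of what the parser is doing, here's a small sketch in the same spirit: scan captured make output for "warning:" and "error:" lines and tally the most common kinds. The sample log lines are invented, and the plugin's actual matching rules may differ.

```python
# Tally "warning:"/"error:" lines from captured build output, roughly what
# the Hudson log parser does. The sample log text below is made up.
import re
from collections import Counter

log = """\
gr_foo.cc:12: warning: comparison between signed and unsigned integer expressions
gr_bar.cc:7: error: 'x' was not declared in this scope
gr_foo.cc:40: warning: unused variable 'tmp'
"""

warnings = [line for line in log.splitlines() if "warning:" in line]
errors = [line for line in log.splitlines() if "error:" in line]
print(len(warnings), "warnings,", len(errors), "errors")

# Group warnings by their message text to see which kinds are most common
kinds = Counter(re.sub(r"^.*warning: ", "", w) for w in warnings)
for kind, count in kinds.most_common():
    print(count, kind)
```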

Full Output

The last thing I want to touch upon in this post is that, aside from the nice ways Hudson represents information about each of the builds, you can also access the full console output for each build. If you click on the "Console Output" link on the left menu bar under a particular build, you can see everything it did (you might have to click on the "full log" button on the top to see everything).

Final Thoughts

There is probably a lot more to Hudson that I don't know about and haven't talked about here. Specifically, there are hundreds of plugins for the tool, and I'm not sure what most of them do. So if anyone knows what else Hudson can do to help, please let me know.

 

Using Git with Github for Developing

I just wanted to say a few words about how I've been developing for GNU Radio using Github as my primary place to work. I use Github because it seems to make things easier to track, it's stable, and it provides a nice interface for me and anyone else to work with. It also has the benefit of being very easy to set up and doesn't depend on any of us GNU Radio guys being in the critical path. You are free to branch, work on Github, and then do what you want with the branch.

Hopefully, at some point you'll have done something great and want to see it go back into GNU Radio. At that time, you can contact me and/or Johnathan Corgan and ask us to merge it into the main GNU Radio tree. We'll check it over, play with it, ask questions, maybe ask for some rewrites. If all looks well, we'll merge it with one of the main GNU Radio trees (i.e., maint, master, or next) and you're done!

This post includes a largely unannotated description of how to work with Git and Github to get a branch in and out of GNU Radio. I won't spend time going into what's happening, why, and what else you might be able to do with Git. If you want that, you can go read the Git documents out there already. I've found gitready and the Pro Git Book to be good resources for these questions.

 

1. Clone the GNU Radio repository

From the gnuradio.org Redmine resource, you get access to the GNU Radio git source with:

    git clone git://gnuradio.org/gnuradio.git

That gives you access to the main branches. These are maint, master, and next.

 

2. Set up an account with Github

Just go to the website and follow the instructions to set up an account. It's really easy. For simplicity's sake, we'll say you got an account called "watson," and you named your project "gnuradio." This gives you a git repository called:

    git@github.com:watson/gnuradio.git


3. Add your Github repository as a remote

You want to set up a remote access to this branch from your GNU Radio clone you've just made:

    git remote add watson git@github.com:watson/gnuradio.git

Notice that we've used the "git@github..." URL. This is the secure, read/write-accessible form of your repository, and the remote will be called "watson" from now on. When you type git remote, you should now see two items, "origin" and "watson." The "origin" is the GNU Radio repository you cloned from.

You probably want to do a git fetch watson just to try it out. Right now, though, your remote is empty, so it won't actually do anything. It will be an important command later, though.

 

4. Push to the new repo

You now want to populate your repository. I like to have a copy of the "master" branch on my remote repo. I don't know why this one exactly, but it gives me a good base to work with. It also makes sure you're doing things correctly at this point.

    git checkout master

    git push watson master

The first command just makes sure you are in the "master" branch. The second command actually sends the local branch to the remote repository you've named; in this case, "watson." If you look on Github, you should be able to look at the "Switch Branches" drop-down box and see that you do indeed have a "master" branch there.

I like to try to keep this guy up to date. So when you go and pull a new "master" down from GNU Radio, you should then issue the git push watson master command to update the remote.

 

5. Make a working branch

The important stuff comes when you actually want to create something with GNU Radio. You're going to make a branch to work in and keep it stored on Github. Before creating a branch, you need to decide which branch you are going to branch off of. As I've mentioned before, GNU Radio keeps 3 main branches going: maint, master, and next. I don't want to go too deeply into the differences here, and you can read up on Git to try to understand these branches better (we follow the normal Git workflow conventions here; see man gitworkflows). Basically, maint is bug fixes from the last version; master is the main development branch for stuff that adds functionality but doesn't make too many changes to the API or behavior; next is where all of the crazy stuff happens (you'll find major changes like UHD taking place in next). So think a bit about what you are attempting to do and decide which branch best fits your needs. You can always branch off someone else's branch if they've been doing something you require.

Ok, you've got a branch picked out. Let's just say you're working from "master" to make things easy. Start by making sure you are in master, then create a new branch and check it out. 

    git checkout master

    git branch newwork

    git checkout newwork

Now make some changes. If you're not overly familiar with Git, it's not too hard. If you are familiar with CVS or Subversion, it might actually be a bit harder to reorient yourself to how it works. The main thing we care about at this point is that Git is a distributed version control system. What that means for us now is that you can work on your own machine as much as you like and never actually affect anything on the server until you want to. So when we do a "commit," we're only working locally. It's only when we actually "push" code that the server gets updated. I tell you, this is a great feature when working on a plane or train and you aren't connected to the Internet. You can keep making small, self-contained commits and then push them when the time is right. You can also do other Git magic, but I'll let you discover that for yourself.

To commit, use the git commit command:

    git commit [options]

Look into Git to see what I mean by options. If you want to commit everything all at once, you can just use:

    git commit -a

But there are other ways of going about it.

So you've done your work and want to push it to Github. You do this because maybe you just want to keep your work stored someplace safe. Or maybe you're going to share it with someone else, or, like me, you work on a number of different machines and use it to keep your development branches synced. Whatever, I don't care. You just know that you are ready to push. It's easy:

    git push watson newwork

Bang! Done. You don't even have to name it "newwork"; use whatever you want.

You can then commit, push, pull, and all that other fun stuff. I'll let you play from here.
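If you want to see the whole commit-locally-then-push cycle end to end without touching Github, here's a self-contained sketch driven from Python. A throwaway bare repository stands in for the Github remote, and the paths, identity, and branch name are all made up for illustration.

```python
# Demonstrate "commit locally, push when ready" against a throwaway bare
# repository standing in for git@github.com:watson/gnuradio.git.
import os
import subprocess
import tempfile

def git(*args, cwd=None):
    # Small helper: run a git command and return its stdout
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

tmp = tempfile.mkdtemp()
remote = os.path.join(tmp, "remote.git")  # stand-in for the Github repo
work = os.path.join(tmp, "work")

git("init", "--bare", remote)
git("init", work)
git("config", "user.email", "watson@example.com", cwd=work)  # made-up identity
git("config", "user.name", "Watson", cwd=work)
git("remote", "add", "watson", remote, cwd=work)

with open(os.path.join(work, "file.txt"), "w") as f:
    f.write("some new work\n")
git("add", "file.txt", cwd=work)
git("commit", "-m", "purely local commit, no network needed", cwd=work)

# Only now does the "server" see anything:
git("push", "watson", "HEAD:newwork", cwd=work)
refs = git("ls-remote", "--heads", "watson", cwd=work)
print(refs)  # the remote now has a refs/heads/newwork branch
```

Until the push, nothing outside the local repository changes, which is the whole point of the commit/push split described above.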

 

6. Merge it into GNU Radio

You probably won't be doing this step. We have some quality control for what ends up on the GNU Radio git branches. There are a handful of people who are able to push directly to these branches, though. If you are really active with the project and really want to participate that closely, talk to us. But keep in mind that we're acting as gatekeepers here to ensure a level of quality and stability in the code base. We like to make sure the code doesn't break the build, doesn't introduce large bugs or major problems (we all introduce bugs; this is to try to minimize them), doesn't duplicate existing efforts, and is a good fit for the code base. Again, at this point, we will work together to get your code in and placed correctly.

But let's say you can talk directly to the GNU Radio git server. What would you do? Or in most cases, what do we do with your branch? It's a merge and push, really. You've finished "newwork" and want it moved into "master," so you send me an email explaining this:

"Hi Tom,

Great work on the project! You're really amazing! [ok, you don't need to include that part].

I've just completed "newwork" that does "what newwork does." I branched it off "master" and you can find it here:

git://github.com/watson/gnuradio.git

Please merge this with "master."

Thanks!

Me"

I'll set up to track your remote branch, check out the logs, see what you've done, and then make the decision to either merge it or maybe ask you to make some corrections or edits. But let's just say that it's perfect and ready to be merged. Here are the basic steps, as long as nothing weird happened:

    git remote add watson git://github.com/watson/gnuradio.git

    git fetch watson

    git branch --track newwork watson/newwork

        [maybe do some work myself here]

    git checkout master

    git merge newwork

    git push origin master

We're done. The "master" branch contains your changes. You can now get rid of your branch if you want:

    git branch -d newwork

    git push watson :newwork

I included that last bit because I find it to be one of Git's most confusing syntaxes. Seriously? A colon is a remote delete? We couldn't have done something more expressive? Well, whatever. That's what it is.

 

That's it!

So now go and have fun with GNU Radio! And if you want to track my work, I keep a repo on Github at:

git://github.com/trondeau/gnuradio.git