Interview with David Baker, RF Test Engineer at Broadcom Ltd.

Best Practices for Migration from Legacy Systems to the OpenTAP Platform

David Baker, RF Test Engineer at Broadcom

OpenTAP platform users have many and varied migration stories to share, at both high and low levels, from architectures to tools to APIs to data structures and common porting chores. Here, David Baker, RF Test Engineer at Broadcom, shares his migration tale with Bill Weinberg, OpenTAP community facilitator.

Bill Weinberg: How did you first encounter the OpenTAP platform and what inspired your migration?

David Baker: In 2016, our RF testing team at Broadcom concluded that our characterization test software had critical scalability limitations and sought an overhaul. We tried to do the overhaul ourselves, but ultimately didn’t have the software development resources in-house. Then in May of 2017, the Keysight team visited to pitch TAP (the original source for OpenTAP). We hadn’t considered an external solution until that moment, but it seemed like it could address our problems.

Bill: Can you tell us about your legacy test system?

David: Prior to migrating to the original Keysight test automation platform (TAP), our test code was also in C#. We had several hundred thousand lines of code across three different project libraries. The first issue to address was the tight coupling across all of our code: lots of static classes and variables, not much modularity. The instrument classes, the settings classes, and the GUI all depended on one another. The solution was monolithic and required a large amount of hardware and file configuration in order to function at all. It would not scale up or down very far from its original form.


Bill: And what about the migration process?

David: We spent several months refactoring, and eventually our three project libraries separated into about 20 smaller modular libraries. For example, there’s one library for signal generators, one for power sensors, and so on. This refactoring was a valuable exercise regardless of the target platform.

Next, we started developing the Keysight side. Generally speaking, for each of the 20 libraries, we developed an additional OpenTAP library to wrap it, doubling the count to about 40. Today we have about 20 OpenTAP libraries wrapping 20 legacy non-OpenTAP libraries. Most of the heavy test functionality and IP resides in the non-OpenTAP libraries, while the OpenTAP libraries mostly just implement the OpenTAP interface.
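
To give a feel for this wrapper pattern, here is a minimal sketch of what one such thin OpenTAP step might look like. The LegacySignalGenerator class, its Configure method, and all names here are hypothetical stand-ins rather than Broadcom’s actual libraries; only the TestStep base class and attributes are standard OpenTAP API.

```csharp
using OpenTap;

namespace Broadcom.OpenTapWrappers
{
    // Hypothetical stand-in for a class from one of the ~20 legacy
    // (non-OpenTAP) libraries, where the real test IP lives.
    public class LegacySignalGenerator
    {
        public void Configure(double frequencyMhz, double powerDbm)
        {
            // Instrument I/O lives here in the real legacy library.
        }
    }

    // The OpenTAP-side library mostly contains thin steps like this one,
    // which adapt legacy calls to the OpenTAP TestStep interface.
    [Display("Set Signal Generator", Group: "RF")]
    public class SetSignalGeneratorStep : TestStep
    {
        // Settings exposed to the OpenTAP engine and plan editors.
        [Unit("MHz")]
        public double FrequencyMhz { get; set; } = 1000;

        [Unit("dBm")]
        public double PowerDbm { get; set; } = -10;

        public override void Run()
        {
            // All heavy functionality stays in the legacy library; the
            // wrapper only adapts the call to the OpenTAP interface.
            var generator = new LegacySignalGenerator();
            generator.Configure(FrequencyMhz, PowerDbm);
            UpgradeVerdict(Verdict.Pass);
        }
    }
}
```

Keeping the wrapper this thin means the legacy library stays usable on its own, while the OpenTAP side gets settings, logging, and verdict handling for free.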

Bill: What were the most palpable benefits of migrating to the new architecture?


David: One of several benefits of our migration has been scalability. Since the solution is no longer monolithic, we can support much bigger, and also substantially smaller, applications and configurations than before. This scalability allows OpenTAP to be useful in more scenarios than we first anticipated.

Bill: What challenges did you encounter during the migration and how did you overcome them?

David: The main technical challenge was refactoring our spaghetti code, or more like an entire Olive Garden. But each incremental change was beneficial, which kept the ball rolling.

The next and ongoing challenge has been gaining user adoption. Our OpenTAP adoption rate has varied quite a bit across the wide variety of users, applications, and priorities within our labs. Some of our users saw the potential of OpenTAP and got involved early on, helping to shape our solutions. Several others have adopted OpenTAP out of necessity, typically after facing a new test requirement that is not supported by our legacy test software.

Overall, OpenTAP adoption is on a good trajectory, but we still have work to do. A key issue is the need for solid, available training resources. The Keysight training course is good, but of course doesn’t cover our internal plugins and usage model.

Bill: Your story above is replete with “lessons learned” – any others to call out?

David: Sure. A big question we had when we started was “how much functionality to pack into each test step?” Answering this question gave rise to several key philosophies:

Test steps should be designed to be as lightweight as practical

For example, let’s say you need to send a digital signal, then measure a DC current. Those should be two separate steps, not combined into one step. This results in greater flexibility.
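
To make that concrete, here is a minimal sketch of the two operations as separate OpenTAP steps. The step names are hypothetical and the instrument I/O is replaced by placeholders so the sketch stays self-contained:

```csharp
using System.Collections.Generic;
using OpenTap;

namespace Broadcom.Examples
{
    // First lightweight step: only sends the digital signal.
    [Display("Send Digital Signal", Group: "Examples")]
    public class SendDigitalSignalStep : TestStep
    {
        public override void Run()
        {
            Log.Info("Sending digital signal pattern");
            // Real digital pattern generation would happen here.
            UpgradeVerdict(Verdict.Pass);
        }
    }

    // Second lightweight step: only measures the DC current.
    [Display("Measure DC Current", Group: "Examples")]
    public class MeasureDcCurrentStep : TestStep
    {
        [Unit("A")]
        public double LimitAmps { get; set; } = 0.5;

        public override void Run()
        {
            double measured = 0.12; // placeholder for a real measurement
            Results.Publish("DC Current", new List<string> { "Current (A)" }, measured);
            UpgradeVerdict(measured <= LimitAmps ? Verdict.Pass : Verdict.Fail);
        }
    }
}
```

Because the steps are independent, either one can be reused alone, repeated, or reordered in a test plan without touching the other.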

My rule of thumb is this: make each test step as small as it can get, without it needing to pass data awkwardly to another step. Sometimes steps will need to be rather large in order to satisfy that rule, but most often they can be pretty lightweight. Which brings me to my second point:


Test steps shouldn’t secretly pass data to each other

It’s tempting to have steps share data with each other behind the scenes, using static variables. But this kind of ad hoc data communication falls outside the general paradigm of the OpenTAP platform, resulting in less flexibility and strange dependencies between different steps. With a little more thought, I typically find that a better solution exists.
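
One such solution is OpenTAP’s explicit input/output mechanism: a producing step marks a property with the [Output] attribute, and a consuming step declares an Input&lt;T&gt; property that the user wires to that output in the test plan editor. A minimal sketch, with hypothetical step names and a placeholder measurement:

```csharp
using OpenTap;

namespace Broadcom.Examples
{
    // The measuring step declares an explicit, visible output...
    [Display("Measure Supply Current", Group: "Examples")]
    public class MeasureSupplyCurrentStep : TestStep
    {
        [Output]
        [Unit("A")]
        public double MeasuredCurrent { get; private set; }

        public override void Run()
        {
            MeasuredCurrent = 0.12; // placeholder for a real measurement
            UpgradeVerdict(Verdict.Pass);
        }
    }

    // ...and the consuming step declares an explicit input, which the
    // user connects to the output above in the plan editor.
    [Display("Check Current Limit", Group: "Examples")]
    public class CheckCurrentLimitStep : TestStep
    {
        public Input<double> Current { get; set; } = new Input<double>();

        [Unit("A")]
        public double LimitAmps { get; set; } = 0.5;

        public override void Run()
        {
            UpgradeVerdict(Current.Value <= LimitAmps ? Verdict.Pass : Verdict.Fail);
        }
    }
}
```

The dependency is now visible in the test plan rather than hidden in a static variable, so both steps remain reorderable and reusable.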

Bill: David, thank you for the wonderful insights – a memorable migration story! We’ll check back with you from time to time to see how your test automation projects are advancing with the OpenTAP platform.