Archive for the ‘labview_communications’ Category

Being in product marketing for much of my career, I’m often insulted by the software tools I run across that are all style and no substance. They look good, but they don’t do a whole lot. As a seasoned product manager as well, I’m just as often disappointed with the products that are all substance with zero style. This category of products is functionally powerful yet embarrassing from a usability standpoint.

 

The most rewarding experiences, of course, are when we get it right: powerful products with intuitive workflows—style AND substance. This simple-to-understand but difficult-to-execute scenario is exactly what I challenge my product managers to specify and my product marketers to demand.

 

This perfect balance of style and substance isn’t always worth the extra effort, of course—if you need to design a ‘File’ menu, there’s no need to innovate on the power or ease-of-use vector; just create your File>>Open options and move on. However, the software areas that stand to offer significant productivity to the applications and domain experts that need it absolutely deserve the research and rigor to get it right. I believe the current area in engineering that needs this level of attention is system design.

 

By Wikipedia’s definition, “system design” is the process of defining the architecture, components, modules, interfaces, and data for a system to satisfy specified requirements. In our world of engineering and science, this overlaps with systems engineering and system architecture roles in many companies. But the tools and technologies here are far from where they need to be as we look into the not-so-distant future. The Internet of Things will add system design complexities like we’ve never seen—from security systems to distributed nodes, data centers to timing engines, co-simulation to deployed prototypes—we need a system design tool and view to manage heterogeneous, distributed, intelligent systems. And to be blunt, what you’ve got today won’t cut it.

 

What you’ve got from NI won’t cut it. What you’ve got from the math vendors won’t cut it. What you’ve got from the box instrumentation vendors won’t cut it.

 

Today, NI asks you to leave your development environment to discover, configure, and manage your hardware systems. Math vendors take a simulation-only or simulation-centric view that doesn’t make sense for real-world prototyping or deployments. Test and measurement vendors take a narrow approach to system design, perhaps only focusing on the physical layer or wireless systems with no inclusion of necessary implementation flows or supporting I/O. Industrial vendors appear to be closer to providing a visual representation of your hardware system, only to let you down once you try to act on that information through compilation or application logic. Each option today fails you in style, substance, or sadly sometimes both.


 

Your systems will soon require more. You deserve more. Fortunately, we understand these needs and have a rich roadmap to address the substantial gaps we see today. 

 


 

 

The future of system design delivers style and substance…stay tuned.

 


 

Today’s Featured Author

Shelley Gretlein is a self-proclaimed software geek and robot aficionado. As NI’s director of software marketing, you can find Shelley championing LabVIEW from keynote stages to user forums to elevator conversations. You can follow her on Twitter at @LadyLabVIEW.

Today’s post is part of a series exploring areas of focus and innovation for NI software.

 


As a child, I was obsessed with playing games on the TI-99/4A home computer my grandfather gave me when I was 12 years old. I typed in my own games from program listings in computer magazines, and one of my most joyful childhood memories was the day I could afford a tape cassette to persist my programs and play my games without typing them in over and over. In many ways, my first years captured a pattern I now live out as a professional computer scientist—namely, figuring out how to optimize software against a bunch of interesting hardware capabilities.

 

The LabVIEW team charter is to maximize your productivity as you design, build, and deploy measurement and control systems. To do that, much as in my childhood, we seek to maximize how the software uses available computing resources so you can design, build, and deploy quickly and easily without having to constantly worry over the details of the underlying hardware.

 

LabVIEW must productively abstract hardware capability without obstructing system performance. 


LabVIEW was a pioneering programming language that cleanly abstracted critical computer science details such as explicit memory management, type management, and concurrent program execution. I believe there exists a natural tension between the level of abstraction and performance. The ideal abstraction is infinitely productive with no performance penalty. Sadly, in the reality of hardware and software systems, higher-level abstractions often come with a commensurate performance penalty. However, as healthy abstractions mature, productivity goes up and performance penalties go down. A natural tension also lies between NEW hardware capabilities and software adoption. It’s fair to say that most software languages poorly abstract multicore hardware, never mind the more advanced GPUs and FPGAs, never mind elastic cloud fabrics. Software capability constantly lags behind hardware capability, especially when you add higher-level software abstractions into the mix.

 

With LabVIEW, we feel we have an excellent, mature abstraction for scheduling concurrent clumps of code while cleanly abstracting the data movement between them. We can compile and execute these independent clumps and handle the scheduling and data movement for you. Back when the Father of LabVIEW, Jeff Kodosky, and the early LabVIEW team were developing the algorithms for automatically “clumping” code together for scheduling, we chose a simple, aptly descriptive name for that activity: “the Clumper.” The Clumper persists to this day and is one of the central algorithms for identifying the optimal code execution schedule. Now, as the hardware platforms LabVIEW supports have evolved, so too has the spirit of the Clumper. From the early days, LabVIEW was ideally suited to map code to multicore processors, but I would say we really hit our stride when we started targeting FPGAs from Xilinx. While mainstream computing and the EDA/FPGA spaces use different terms to describe the process, the computer science behind characterizing code execution, clumping the code, and scheduling it out is the same regardless of hardware target. However, the characterization process that determines the ultimate performance of the executing code is unique to each hardware platform.
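The gist of clumping can be illustrated with a toy sketch (the function names and the chain-merging heuristic here are mine for illustration, not LabVIEW internals): in an acyclic dataflow graph, nodes on a pure single-in/single-out chain must run sequentially anyway, so they can be merged into one schedulable clump, while separate clumps with no data dependency between them are free to run concurrently.

```python
from collections import defaultdict

def clump_chains(edges, nodes):
    """Merge linear chains of an acyclic dataflow graph into clumps.

    Nodes on a single-in, single-out chain must execute sequentially,
    so grouping them lets a scheduler treat the chain as one unit.
    """
    succ = defaultdict(list)
    pred = defaultdict(list)
    for a, b in edges:
        succ[a].append(b)
        pred[b].append(a)

    clumps, seen = [], set()
    for n in nodes:
        if n in seen:
            continue
        # Walk back to the start of this node's chain.
        start = n
        while len(pred[start]) == 1 and len(succ[pred[start][0]]) == 1:
            start = pred[start][0]
        # Walk forward, collecting the whole chain.
        chain, cur = [start], start
        seen.add(start)
        while len(succ[cur]) == 1 and len(pred[succ[cur][0]]) == 1:
            cur = succ[cur][0]
            chain.append(cur)
            seen.add(cur)
        clumps.append(chain)
    return clumps

# A fans out to a chain B->C and to D; both paths join at E.
graph = [("A", "B"), ("B", "C"), ("A", "D"), ("C", "E"), ("D", "E")]
print(clump_chains(graph, ["A", "B", "C", "D", "E"]))
```

Here the chain B→C collapses into one clump, and that clump shares no dependency with D, so the two can be scheduled concurrently—the same judgment a real dataflow scheduler makes, just with far more sophisticated cost models.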

 

To maximize your productivity with LabVIEW and still deliver execution performance AND minimize FPGA resource utilization, we have invested heavily in creating a robust system to characterize LabVIEW code on a variety of Xilinx FPGAs. Our system characterizes “clumps” of LabVIEW code as we compile them and run them through Xilinx’s Vivado flow. To give you an idea of scale, we have a large execution farm that compiles designs for Xilinx FPGAs, such as the Kintex-7 and Zynq SoC. The farm does HUNDREDS of THOUSANDS of compiles to characterize the timing and resource usage of core components. We also have hundreds of sample program inputs that we run through the clumping, code generation, and synthesis process. We instrument that process to track design characteristics of merit, ranging from execution speed to utilization of key FPGA resources such as LUTs, DSP48s, and so on. From that, we create a massive optimization search space that we use as input into our FPGA compiler. Recently, one of our key IP and compiler developers created an LDPC code design, a state-of-the-art approach to error correction that outperforms the traditional Viterbi algorithm. The search space for finding a great design is 2.3 × 10^32 possible implementations. Using a clever and patented decision-space pruning technique, we found a solution that achieved 24 Gbps throughput in only three minutes. This technique, combined with Xilinx’s massive improvements in the speed of the Vivado flow versus ISE, will greatly increase your productivity while delivering highly performant results. While there is always room to improve a compiler, we are proud of the innovations we have made with LabVIEW FPGA. You can check out the results via the LabVIEW Communications System Design Suite, the first product to include our newest technology for FPGAs (along with a host of other innovations).
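The patented pruning technique itself isn’t detailed in this post, but the general shape of a pruned design-space search can be sketched with branch and bound (all names, numbers, and the “unroll factor” parameterization below are illustrative, not NI’s actual formulation): enumerate per-stage implementation choices, but abandon any partial assignment that already exceeds the resource budget.

```python
def best_design(stages, lut_budget):
    """Branch-and-bound search over per-stage implementation choices.

    stages: one dict per pipeline stage, mapping an unroll factor to
    a (throughput, luts) pair measured by characterization. Pipeline
    throughput is the minimum stage throughput; LUT usage is the sum.
    """
    best = {"throughput": 0, "choice": None}

    def search(i, choice, luts, thr):
        if luts > lut_budget:
            return  # prune: already over budget, no child can recover
        if i == len(stages):
            if thr > best["throughput"]:
                best.update(throughput=thr, choice=tuple(choice))
            return
        for factor, (t, l) in stages[i].items():
            choice.append(factor)
            search(i + 1, choice, luts + l, min(thr, t))
            choice.pop()

    search(0, [], 0, float("inf"))
    return best

# Two stages, each with two candidate unroll factors (hypothetical data).
stages = [{1: (10, 100), 2: (20, 250)},
          {1: (15, 120), 2: (30, 300)}]
print(best_design(stages, lut_budget=400))
```

With a 400-LUT budget, the (2, 2) branch is cut as soon as its cost exceeds the budget, and the search settles on unrolling stage one while leaving stage two alone. At the scale of 10^32 candidates, it is aggressive pruning of this kind—not enumeration—that makes a three-minute answer plausible.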

 

I can’t say the innovations we have made in the FPGA compiler give me quite the same childlike joy that I had when I got my first disk drive, but it is pretty close. Stay tuned as we strive to find the “Genius of the And” and deliver you the most productive abstractions AND the best executing results!

 

 

Review the features of LabVIEW Communications today.

 

 


Today’s Featured Author

David Fuller has nearly 20 years of experience in software engineering and is currently NI’s vice president of application and embedded software. You can’t follow him on Twitter because he’s a software engineer.

 


Today’s post is part of a series exploring areas of focus and innovation for NI software.

 


An internal goal at NI is to be a trusted advisor for engineers. In many cases, this perspective has focused on translating emerging technology trends into meaningful know-how for you, regardless of our products’ involvement. Microsoft’s switch from XP to Vista, multicore processing, and now the Internet of Things are examples where NI has partnered with industry leaders, including Intel and Microsoft, to ensure that you understand the impact these trends had, or will have, on you.

 

Another example is field-programmable gate array (FPGA) technology, which is quickly becoming broadly relevant for both test and control applications. The FPGA silicon provides a rare combination of high-performance throughput and ultimate reliability, with a purely digital logic implementation that eliminates the need for an embedded OS. The main challenge of the FPGA, however, is how incredibly difficult it is to program. Currently, programming an FPGA requires a low-level language such as VHDL or expensive and unreliable code conversion utilities.

 

The LabVIEW graphical dataflow paradigm maps perfectly to the FPGA architecture. The primary attraction that LabVIEW provides for FPGA programming is abstracting a low-level programming paradigm into a high-level graphical representation. Even with that abstraction, LabVIEW users are telling us that:

 

  1. Mapping algorithms to the FPGA requires many iterations of manual code rewrites
  2. Compile times are long
  3. Performant applications still require an intimate knowledge of the FPGA architecture

 

As I’ve mentioned before, there are several investments we are making with LabVIEW (previously I’ve discussed UI and UX) to bring back the level of innovation that you expect. The newly announced LabVIEW Communications System Design Suite brings to market a redesigned FPGA compiler that addresses all of the issues above. This new FPGA compiler is designed specifically for algorithm designers looking to map theoretical floating-point math to the FPGA. However, there are several components that you can apply more broadly, such as these three key innovations:

 

  1. A data-driven float-to-fixed compiler
  2. The multirate diagram, a new dataflow-based model of computation that enables the synchronous execution of code at varying rates
  3. A built-in profiler that provides compile recommendations based on input parameters, such as desired throughput
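To give a feel for what “data-driven float-to-fixed” means, here is a generic range-analysis sketch (my own simplification, not NI’s actual algorithm): run representative sample data through the algorithm, size the integer bits to cover the largest observed magnitude, and give whatever remains of the word to fraction bits.

```python
import math

def fixed_point_format(samples, word_length=16, signed=True):
    """Pick a fixed-point format from observed sample data.

    Integer bits cover the largest observed magnitude; the remaining
    bits of the word become fraction bits. Real tools also propagate
    ranges through the dataflow and track accumulated error.
    """
    peak = max(abs(s) for s in samples)
    int_bits = math.floor(math.log2(peak)) + 1 if peak >= 1 else 0
    sign_bit = 1 if signed else 0
    frac_bits = word_length - sign_bit - int_bits
    return int_bits, frac_bits

def quantize(x, frac_bits):
    """Round a float to the nearest representable fixed-point value."""
    scale = 2 ** frac_bits
    return round(x * scale) / scale

# Hypothetical sample data: peak magnitude 3.2 needs 2 integer bits,
# leaving 13 fraction bits in a signed 16-bit word.
print(fixed_point_format([0.5, -3.2, 2.9]))
```

The “data-driven” part is the key trade-off: formats sized from observed ranges use far fewer FPGA resources than worst-case sizing, at the cost of needing sample inputs that actually exercise the algorithm’s dynamic range.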

 

This redesigned FPGA compiler is indicative of the level of innovation going into LabVIEW. Although LabVIEW Communications is designed specifically for wireless prototyping, the majority of the innovation can be broadened to a much larger portion of the existing LabVIEW user base.

 

Review the features of LabVIEW Communications today.

 


 

Today’s Featured Author

Jeff Phillips considers LabVIEW as essential to his day as food, water, and oxygen. As senior group manager for LabVIEW product marketing at NI, Jeff focuses on how LabVIEW can meet the changing needs of users. You can follow him on Twitter at @TheLabVIEWLion.



I’m going to contradict myself in this month’s blog post. I don’t use clichés much; however, I (usually) find them to be accurate and descriptive. (Note: because most clichés have become trite or irritating, we often forget that they originated in truth.) Let me take you back to when I started at NI.

 

In the late ‘90s, reprogrammable silicon was considered mainstream across consumer, automotive, and industrial applications. Thanks to the critical invention of the XC2064 FPGA by Freeman and Vonderschmitt, the FPGA was becoming a coveted technology for its compute power, field upgradability, and performance capabilities. However, the tools to program the FPGA kept domain experts out, making it a technology that seemed too good to be true. Or so I thought.

 

In 2001, I began working with an in-development product we had demoed at NIWeek a few years earlier but hadn’t yet released. This not-so-secret project, code-named “RVI” for reconfigurable virtual instrumentation, was a graphical design approach to programming an FPGA. With my computer science and math background, abstract, software-centric work is more comfortable and familiar to me than meticulous hardware design. So the idea that you (or even a CS person like me) could abstract a ton of silicon details and program the hardware with a productive tool like LabVIEW (rather than an HDL) seemed impossible.

 

This is where the contradiction begins. It wasn’t too good to be true; the cliché was wrong. It was good AND it was true. Luckily, I could rely on another well-known phrase used at NI to describe the innovation taking place: “the genius of the AND,” inspired by author Jim Collins. With productive graphical programming, system abstraction, AND hardware design for dedicated determinism (including 25 ns I/O response, protocol customization, and rapid prototyping), LabVIEW FPGA breaks the cliché.

 

I’m not the only geek who gets excited about this capability. Stijn Schacht of T&M Solutions took advantage of the control accuracy of an FPGA to lift 20-metric-ton unbalanced trays of uncured concrete more than 6 meters while maintaining a strict accuracy of two millimeters. Because he used LabVIEW to get that precision from the FPGA, his team developed an application in only two months and was able to reuse the modular code for their next project.

 

Kurt Osborne at Ford Motor Company is a believer as well. Ford used LabVIEW FPGA to design and implement a real-time embedded control system for an automotive fuel cell system.

 


The LabVIEW Communications environment enables an entire design team to map an idea from algorithm to FPGA using a single high-level representation.

 

So what’s next? I encourage you to explore the latest cliché contradiction that takes FPGA design to the next level – LabVIEW Communications System Design Suite.

 

LabVIEW Communications is a complete design flow (with bundled software defined radio hardware) for wireless communications algorithms. The suite includes everything from an integrated FPGA flow and an HLS compiler to a new canvas (the Multirate Diagram) for streaming algorithms and an innovative way to explore your hardware system with NI System Designer. The genius of the AND lives on in LabVIEW Communications.

 

Explore the latest cliché contradiction today at ni.com/labview-communications.

 


 

Today’s Featured Author

Shelley Gretlein is a self-proclaimed software geek and robot aficionado. As NI’s director of software marketing, you can find Shelley championing LabVIEW from keynote stages to user forums to elevator conversations. You can follow her on Twitter at @LadyLabVIEW.