Archive for the ‘robotics’ Category

NI announced today that ImagingLab, an NI Alliance Partner specializing in vision and robotics integration, has added a new robotics library for Toshiba Machine robots. This library of graphical functions is optimized for NI LabVIEW system design software and lets users control every aspect of a robotics system without complex robotics programming expertise.

Because engineers can use a single LabVIEW application to control everything from part handling to advanced measurements, the library is well suited to those who would normally avoid robotics in their applications out of concern over complexity.

“Using the ImagingLab Robotics Library for Toshiba Machine robots, engineers and scientists can take advantage of the productivity and reliability of LabVIEW,” said Jamie Smith, director of embedded systems marketing at National Instruments. “Engineers can quickly integrate robotics into advanced applications such as laboratory automation, precise component assembly and complex testing.”

>> Learn more about the ImagingLab robotics libraries here.

Chess is a complex game that demands intense strategy, and developing a robot that can play it demands the same. So when students at IUT1 de Grenoble decided to build a chess-playing robot for the Chess’up! challenge at the 2011 French Cup of Robotics, they turned to NI LabVIEW for help.

[Image: iutga.jpg]

By using LabVIEW to design and program the software, the students established straightforward communication between the robot’s “brain,” an NI sbRIO-9632 module, and its movements. Because of the uncomplicated approach to programming that LabVIEW provides, they were able to operate complex platforms simply and quickly.
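
The students’ code itself is graphical LabVIEW, but the kind of translation the “brain” has to perform, turning a chess move into motion targets, can be sketched in a few lines of text code. The Python sketch below is purely illustrative: the board dimensions, origin, and function names are assumptions for this example, not details from the project.

# Hypothetical sketch: turning a chess move into pick-and-place coordinates.
# The real robot is programmed in LabVIEW; the square size and board origin
# below are made-up values used only to show the idea.
SQUARE_MM = 40.0            # assumed width of one board square, in mm
ORIGIN_MM = (100.0, 50.0)   # assumed XY of square a1's centre in the robot frame

def square_to_xy(square: str) -> tuple[float, float]:
    """Convert an algebraic square such as 'e2' into robot XY coordinates (mm)."""
    file_idx = ord(square[0].lower()) - ord('a')   # files a..h -> 0..7
    rank_idx = int(square[1]) - 1                  # ranks 1..8 -> 0..7
    return (ORIGIN_MM[0] + file_idx * SQUARE_MM,
            ORIGIN_MM[1] + rank_idx * SQUARE_MM)

def move_to_waypoints(move: str):
    """Turn a move such as 'e2e4' into a pick coordinate and a place coordinate."""
    return square_to_xy(move[:2]), square_to_xy(move[2:4])

if __name__ == "__main__":
    pick, place = move_to_waypoints("e2e4")
    print("pick piece at", pick, "place it at", place)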

 

The chess-playing robot took 41st place in the competition, a very encouraging result for students who had received their BAC only two to three years earlier and were competing for the first time. The robot ranked higher than entries from many engineering schools, and the result was the best IUT1 Grenoble has achieved in its eight years in the competition.

We know that remote-controlled robots are being developed and played with every day, but what these robots can do is sometimes limited. Students and researchers at Temasek Polytechnic have pushed back on those limits by creating a robot capable of surveillance, motion, and awareness of its surroundings.

They used NI PXI hardware to control the webcam, sensors, and motors essential for surveillance, and LabVIEW software to display the webcam image to the operator. With this setup, a user can remotely control the robot from a PC while its sensors report where the robot is heading.
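
The case study’s implementation is in LabVIEW on PXI hardware; purely as a text-code illustration of the remote-control idea, here is a minimal Python sketch in which a PC sends one drive command over TCP and reads back a sensor reading. The address, port, and message format are assumptions, not details from the project.

# Hypothetical sketch of PC-to-robot remote control over TCP.
# The IP address, port, command names, and reply format are all assumptions.
import socket

ROBOT_ADDR = ("192.168.1.50", 5000)   # assumed robot IP address and port

def send_command(direction: str) -> str:
    """Send one drive command ('forward', 'left', ...) and return the robot's reply."""
    with socket.create_connection(ROBOT_ADDR, timeout=2.0) as sock:
        sock.sendall((direction + "\n").encode())
        return sock.recv(1024).decode().strip()   # e.g. a sensor reading such as "range_cm=87"

if __name__ == "__main__":
    print(send_command("forward"))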

 

[Image: robot.jpg (“Can” photo by Mo Riza)]

 

 

>> Read the full case study. 



NI engineer and community member RoboticsME built a robot with the LabVIEW Robotics Starter Kit that can better maneuver around obstacles with the help of Microsoft’s Xbox Kinect.

The LabVIEW Robotics Starter Kit is an out-of-the-box mobile robot platform complete with sensors, motors, and NI Single-Board RIO hardware for embedded control. RoboticsME attached a Kinect, whose depth camera gives a 3D image of the surroundings, to the robot’s upper body. The Kinect can see objects in the robot’s surroundings, such as chair legs, but it cannot see objects in close proximity. That’s where the robot’s built-in sonar sensors come in: they sense objects in close proximity to the robot, such as those in its blind spot.

In RoboticsME’s design, a FitPC running Windows Embedded 7 and LabVIEW Robotics collects and processes the Kinect data, the LabVIEW FPGA Module collects and processes the sonar data, and the LabVIEW Real-Time Module fuses the two streams and performs obstacle avoidance.
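
The fusion step runs in LabVIEW on the robot itself, but the idea is easy to show in text code. The Python sketch below is only illustrative, and everything in it (the sector layout, distances, and threshold) is an assumption: each steering sector keeps the nearer of the Kinect and sonar readings, so a close obstacle in the Kinect’s blind spot still triggers an avoidance turn.

# Hypothetical sketch of fusing far-field (Kinect) and near-field (sonar)
# distances per steering sector, then applying a simple avoidance rule.
# Sector layout, distances, and the stop threshold are illustrative only.
import numpy as np

STOP_CM = 30.0   # assumed distance at which the robot must react

def fuse_sectors(kinect_cm: np.ndarray, sonar_cm: np.ndarray) -> np.ndarray:
    """Per steering sector (left, centre, right), keep whichever reading is nearer."""
    return np.minimum(kinect_cm, sonar_cm)

def avoid(fused_cm: np.ndarray) -> str:
    """Go forward if the centre is clear, otherwise turn toward the more open side."""
    left, centre, right = fused_cm
    if centre > STOP_CM:
        return "forward"
    return "turn_left" if left > right else "turn_right"

if __name__ == "__main__":
    kinect = np.array([120.0, 95.0, 60.0])   # far-field distances per sector, in cm
    sonar = np.array([999.0, 25.0, 999.0])   # close obstacle dead ahead, in the Kinect's blind spot
    print(avoid(fuse_sectors(kinect, sonar)))  # -> "turn_left"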

 

>> Check out a video of the robot in action and download the code here.

When mechanical engineering students at the Massachusetts Institute of Technology (MIT) took on their annual robotics challenge this year, they did so with some help from NI. Professors wanted to give undergraduates a wide variety of programming experience to help them develop sophisticated code for their robot designs.

In the past, MIT had positive experiences using LabVIEW for robotics research but had never brought these products into an undergraduate robotics course. With only 5 percent of students previously taking part in the coding process, MIT hoped that adopting graphical programming would increase the number of students coding in course projects and ultimately enhance the overall robotic design experience.

Before the robotics competition even took place, students worked on various coding projects with LabVIEW and CompactRIO. In a short time, they were using LabVIEW along with LabVIEW MathScript and NI Vision Assistant to incorporate vision-guided motion into their projects, and they quickly became familiar with the effective design integration and ease of use of these products.
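
The students built their vision-guided motion with LabVIEW and NI Vision Assistant; purely as a text-code illustration of the underlying idea (find the target in the camera frame, then steer toward it), here is a small Python/NumPy sketch. The threshold value and command names are assumptions for this example only.

# Hypothetical sketch of vision-guided motion: locate a bright target in a
# grayscale frame and steer toward it. Threshold and commands are assumptions.
import numpy as np

def target_offset(frame: np.ndarray, threshold: int = 200):
    """Return the target's horizontal offset from the image centre, scaled to [-1, 1]."""
    ys, xs = np.nonzero(frame > threshold)    # pixels brighter than the threshold
    if xs.size == 0:
        return None                           # no target in view
    centre_x = frame.shape[1] / 2.0
    return float(xs.mean() - centre_x) / centre_x

def steer(offset) -> str:
    """Map the offset to a drive command."""
    if offset is None:
        return "search"
    if abs(offset) < 0.1:
        return "forward"
    return "turn_right" if offset > 0 else "turn_left"

if __name__ == "__main__":
    frame = np.zeros((480, 640), dtype=np.uint8)
    frame[200:240, 400:440] = 255             # synthetic bright target, right of centre
    print(steer(target_offset(frame)))        # -> "turn_right"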

 

By the time student teams began to work on their robotic system, the percentage of students programming had already increased significantly. Students had to design, build, and control a robotic mechanism that could perform several operations. With the help of LabVIEW software, each student was able to take part in a portion of the programming. In the end, each team was able to get their robot working either independently or in manual mode.

[Image: mitbbb.jpg]

The gantry robot was used for the “Operation Plug the Oil Well” design contest at the end of the course.

 

Students found this new integration of LabVIEW programming and NI hardware to be quite a positive experience, and the overall percentage of students programming with LabVIEW increased to 30 to 40 percent. More students were able to participate and, more importantly, students found the programs very easy to use.

 



>> Read more about how MIT students adopted LabVIEW to meet their course challenge titled “Operation Plug the Oil Well.”

For the love of robotics, community members are designing some pretty incredible VIs! This week we’re featuring community member MarcoPolo5, who developed a VI that lets you simulate a robot using a tree control and a 3D picture control. The program uses a parent-child relationship between its elements: if a translation or rotation is applied to a parent object, it automatically has the same effect on all the connected child objects. Front panel joint controls are available for manually manipulating joints and can also be linked to a secondary program for path simulations. The element object types currently supported are sphere, box, cylinder, cone, and CAD files (ASE, STL, and VRML). Click on the link below to get the code and further operating instructions!
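
MarcoPolo5’s VI does this with LabVIEW’s tree control and 3D picture control; purely as an illustration of the parent-child transform idea, here is a short Python sketch. The class and function names are made up for this example and are not taken from the posted code.

# Minimal sketch of a parent-child transform tree: each element stores a
# transform relative to its parent, so moving a parent moves all its children.
# Names and the example robot below are illustrative assumptions.
import numpy as np

def rot_z(angle_rad: float) -> np.ndarray:
    """4x4 homogeneous rotation about the Z axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

def translation(x: float, y: float, z: float) -> np.ndarray:
    """4x4 homogeneous translation."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

class Element:
    """One node in the tree: a link or joint with a transform relative to its parent."""
    def __init__(self, name, local, parent=None):
        self.name, self.local, self.parent = name, local, parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def world(self) -> np.ndarray:
        """Compose transforms from the root down to this element."""
        if self.parent is None:
            return self.local
        return self.parent.world() @ self.local

if __name__ == "__main__":
    base = Element("base", np.eye(4))
    arm = Element("arm", translation(1.0, 0.0, 0.0), parent=base)
    tool = Element("tool", translation(0.5, 0.0, 0.0), parent=arm)
    base.local = rot_z(np.pi / 2) @ base.local   # rotate the parent element...
    print(tool.world()[:3, 3])                   # ...and the child positions follow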

 

>> Download the code here: Robotics and 3D Object Simulator

 

[Image: robot_simulator.PNG]