Archive for the ‘vision’ Category

From inspecting the packaging of consumer goods to surveilling traffic to identifying cells with fluorescence, vision techniques are increasingly being used in every industry imaginable. Whether you are using vision to improve the quality of your finished goods, guide your robot, or add traceability to improve your process, here are five considerations to keep in mind when you choose your vision system.

 

  1. Software is the key

    Ease of use is all about abstracting the technology components in a system so that, as a domain expert, you can meet familiar application challenges. This means you can concentrate on your vision inspection while NI worries about making your applications work with different camera standards and taking advantage of the latest hardware advancements.

    NI LabVIEW graphical programming software also provides a powerful and easy-to-learn environment (compared to text-based programming), and gives you access to hundreds of functions to enhance images, measure parts, identify objects, check for presence, and locate features through the NI Vision Development Module.

    imagea.png



    Through software, you can model system variations to see if your inspection will stand up to motion blur, changes in lighting, and camera position. These common issues in vision systems can be seen in the image above.
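As a rough illustration of what such modeling can look like, here is a minimal Python/NumPy sketch (the actual workflow described above uses LabVIEW and the NI Vision Development Module; the function names and parameters here are illustrative, not NI APIs). It perturbs a test image with simulated motion blur and a lighting change so you can re-run an inspection against the degraded images:

```python
import numpy as np

def motion_blur(image, kernel_size=5):
    """Simulate horizontal motion blur by averaging each pixel
    with its row neighbors (a simple 1-D box kernel)."""
    kernel = np.ones(kernel_size) / kernel_size
    # Convolve every row with the kernel, keeping the image size.
    return np.stack([np.convolve(row, kernel, mode="same") for row in image])

def change_lighting(image, gain=1.2, offset=10.0):
    """Simulate a lighting change as a linear brightness transform,
    clipped to the valid 8-bit intensity range."""
    return np.clip(image * gain + offset, 0, 255)

# Flat synthetic test image with one bright vertical edge.
img = np.zeros((4, 8))
img[:, 4:] = 200.0

blurred = motion_blur(img, kernel_size=3)     # edge is softened
brighter = change_lighting(img)               # intensities shifted up
```

Feeding `blurred` and `brighter` through the same inspection routine as the clean image gives a quick sanity check that the inspection tolerates these variations.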

  2. Choosing the right camera

    Each application is best suited to a certain type of image sensor, with options spanning area-scan, monochrome, and color sensors as well as specialty sensors such as thermal (infrared), 3D, and line-scan. It is important to be aware of inspection conditions when choosing among these different sensor types.  For example, inspecting quick-moving rolls of textile requires a line-scan sensor while measuring hot metal in a dusty, dark environment requires a thermal camera.

    NI has made it a priority to support the most widely used imaging standards and strives to integrate support for new technologies. With the NI Vision Acquisition Software driver package, you can use a common framework to acquire images from smart cameras; traditional plug-in frame grabbers using analog, parallel digital, and Camera Link standards; and consumer buses such as GigE Vision, USB, and IEEE 1394. These drivers are the first to natively support image acquisition from GigE Vision and IEEE 1394 cameras in real time.

  3. Intelligent vision through industrial connectivity...


To read about numbers 3 through 5, and to dive deeper into the first two points, check out this article on vision systems >>

NI India Graphical System Design Achievement Awards Humanitarian Award Winner

 

Every day we rely on our eyes to observe and analyze the objects around us and to guide us along paths that avoid collisions. Sometimes our eyes falter and we need assistance figuring out where to go, so researchers at IIT Kharagpur created a vision-based electronic navigation system that can help.

 

More commonly known as an electronic travel aid (ETA), the system helps the visually impaired navigate their environment more easily and efficiently. Using an NI 1762 Smart Camera and NI LabVIEW software to identify and classify living, nonliving, moving, and other material objects within a 5 m range, the ETA calculates the distance from each object to the wearer. The system then prepares alternate navigation paths around the objects and communicates the chosen path to the user through speech messages. These messages can be programmed in different languages, a capability that many vision-based electronic navigation systems lack.
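The core decision the ETA makes — route the wearer around anything detected within range — can be sketched in a few lines of Python. This is only an illustration of the idea, not the IIT Kharagpur implementation (which runs on LabVIEW and the smart camera); the sector labels and the `choose_path` helper are hypothetical:

```python
def choose_path(objects, max_range=5.0, sectors=("left", "ahead", "right")):
    """Pick a walking direction: split the field of view into sectors
    and return the first sector with no detected object inside range.
    `objects` is a list of (sector, distance_m) detections."""
    blocked = {sector for sector, dist in objects if dist <= max_range}
    for sector in sectors:
        if sector not in blocked:
            return sector
    return "stop"   # every sector is blocked

# A chair 2 m ahead and a table 3 m to the left: go right.
detections = [("ahead", 2.0), ("left", 3.0)]
path = choose_path(detections)   # -> "right"
```

In the real system the chosen direction would then be rendered as a speech message in the user's configured language.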

iita.png

The entire system is based on LabVIEW software and the NI 1762 Smart Camera running NI Vision Builder for Automated Inspection, which keeps the system compact and lightweight while remaining highly effective. The system successfully detects human presence, chairs, tables, beds, television sets, refrigerators, doors, cupboards, and telephones. Even when a human face is not in the camera’s field of view, the system can still detect a person’s presence. Now you really don’t have to watch where you’re going anymore!

 

 

>> See how they did it.



Happy Valentine’s Day!  Each year we share this chocolate and flower filled day with the ones we love, and this year students at Monash University in Malaysia have made it easier for you to say, "I love you" using sign language.

 

With LabVIEW and the NI Vision Development Module, students improved a sign language translator based on work done by former Monash students. The previous version, built with MathWorks, Inc. MATLAB® software, suffered from delays in translation, image processing, and recognition. By switching to LabVIEW, the students saw little lag between signing and translation because the software can perform parallel processing, allowing nearly instantaneous recognition of finger and hand movements. The new and improved translator can translate Malaysian sign language in real time with 80 percent accuracy.
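The lag reduction comes from running the pipeline stages concurrently instead of back to back. A minimal Python sketch of that idea, using a producer/consumer pair of threads (the Monash system is built in LabVIEW, where this parallelism comes from the dataflow model; the `recognize` callback and gesture labels here are hypothetical):

```python
import queue
import threading

def run_pipeline(frames, recognize):
    """Two-stage pipeline: one thread 'captures' frames while another
    recognizes them, so capture and recognition overlap in time."""
    buffer = queue.Queue()
    results = []

    def capture():
        for frame in frames:
            buffer.put(frame)
        buffer.put(None)          # sentinel: no more frames

    def translate():
        while True:
            frame = buffer.get()
            if frame is None:
                break
            results.append(recognize(frame))

    producer = threading.Thread(target=capture)
    consumer = threading.Thread(target=translate)
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()
    return results

# Hypothetical recognizer: map a gesture label to a word.
signs = {"thumb_up": "good", "wave": "hello"}
translated = run_pipeline(["wave", "thumb_up"], lambda f: signs.get(f, "?"))
```

Because the recognizer starts on the first frame while later frames are still arriving, the end-to-end delay per sign stays small even as the stream continues.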

 

 

 



rinia.jpg

 

>> Check out the case study here. 

Using the NI LabVIEW Real-Time Module, the NI Vision Development Module, and the NI 3110 embedded controller to control and aim high-energy lasers, a small group of engineers and scientists at LaserMotive demonstrated the ability to beam hundreds of watts of energy at a distance of up to 1 km, and over a kilowatt at shorter distances. 

Page 15_Fig 1_Q4_INL.png

>> Read more about LaserMotive and how the company found a way to transmit power wirelessly and supply remote locations with energy.  

A 3D display isn’t a new discovery, but its recent surge in popularity is clear, from 3D movie releases to expensive 3D TVs. Enhancing the optical illusion of depth perception is a unique process: the technique works by presenting two offset images separately to the viewer’s left and right eyes. To make the offset images create depth, you then put on those funny-looking glasses. Though attractive, these systems are expensive and lack interactivity.
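The "two offset images, one per eye" idea is easiest to see in the classic red/cyan anaglyph scheme, which those funny-looking glasses decode. A small Python/NumPy sketch of that composition (purely illustrative; the Tsinghua system described below uses a different, glasses-free optical approach):

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine a grayscale stereo pair into one red/cyan anaglyph:
    the left view drives the red channel, the right view the green
    and blue channels, so tinted lenses route each view to the
    matching eye."""
    h, w = left.shape
    out = np.zeros((h, w, 3))
    out[..., 0] = left           # red channel   -> left eye
    out[..., 1] = right          # green channel -> right eye
    out[..., 2] = right          # blue channel  -> right eye
    return out

# Tiny synthetic pair: the right view is shifted one pixel,
# mimicking the horizontal offset between the two eyes.
left = np.zeros((2, 4))
left[:, 1] = 255
right = np.roll(left, 1, axis=1)
anaglyph = make_anaglyph(left, right)
```

The horizontal shift between the two views is what the brain interprets as depth once each eye sees only its own image.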

 

Professors at Tsinghua University set out to address those deficiencies. Creating a new 3D display system that was interactive and used real objects was a challenge. The process began with creating a virtual model. The team used NI LabVIEW software to read the model document and then set parameters to project a new image, which was displayed on an inverted optical structure. Once the image was displayed, they used a USB camera, NI PXI hardware, and NI Vision Assistant to recognize movement and control the 3D display.

The final steps were to create a system that would actually show the 3D image. The team used a turntable controlled with PXI hardware and the NI 1764 Smart Camera. Together, these tools captured images and gathered information from all directions while the object rotated, allowing users to choose images they wanted to exhibit for the final 3D display.


displayrealobject.jpg

     3D display of real object


By using LabVIEW software with tools such as NI Smart Camera and Vision Assistant, professors at Tsinghua University were able to conquer their 3D challenge. They created a 3D system display that was interactive, inexpensive, and didn’t require special glasses. 

 

>> Learn more about LabVIEW and the 3D display system

We just introduced new additions to NI reconfigurable I/O (RIO) technology – a reconfigurable Camera Link frame grabber, a motion module for the NI CompactRIO platform, and six new custom brushless DC motors.

 

Ideal for advanced inspection and imaging applications, the NI PCIe-1473R frame grabber is a PC-based embedded vision board that combines FPGA technology with a Camera Link interface. The new frame grabber’s onboard FPGA can be programmed with the NI LabVIEW FPGA Module for custom image processing and analysis in real time. It also features a high-bandwidth 850 MB/s Camera Link bus to support a range of Camera Link configurations and includes Power over Camera Link (PoCL) capability, which powers the camera over the Camera Link cable itself and removes the need for additional cables or an external power supply.

 

For advanced motion control challenges, the NI 9502 motion drive module for CompactRIO can directly power brushless, stepper, or brushed servo motors alongside other NI C Series modules to provide a compact, highly customizable motion drive solution. With 4 A continuous/8 A peak current, multiple commutation modes, direct connectivity to our six new three-phase brushless DC motors, and integration with LabVIEW FPGA, the NI 9502 helps engineers implement proprietary custom drive control algorithms, eliminating the need for custom firmware from a drive manufacturer.

 

Check out the frame grabber at www.ni.com/vision and learn more about the drive modules and motors at ni.com/motion.

NI today announced the expansion of the NI Smart Camera family with seven new models and improvements including color and high-resolution options, improved processing power, and IP67 ratings.

 

The new NI 177x Smart Cameras feature a 1.6 GHz Intel® Atom™ processor, which delivers processing speeds four times greater than other NI Smart Cameras, and a real-time operating system for reliability and determinism.  The cameras offer new sensor options to deliver higher resolution image acquisition, rugged mechanical housings, M12 connectors, and dust- and water-resistant lens covers that have earned the cameras an IP67 rating. They can be programmed with LabVIEW graphical programming and the NI Vision Development Module for advanced customization and integration with other National Instruments hardware.

 

Learn more at www.ni.com/smartcamera.

One of the common misconceptions associated with blindness is that it refers to one’s complete inability to see. However, a “blind” person may have some degree of residual vision or be able to detect changes in contrast. And just as there are scientists out there working to develop limb prosthetics, there are also some working on visual prosthetics, which are electronic aids that support sight for visually impaired people.

 

Scientists at the University of Oxford’s Department of Clinical Neurosciences built a prototype pair of glasses that uses the individual's ability to sense changes in contrast. The glasses acquire video feeds from head-mounted cameras and process the image data to detect nearby objects of interest such as people, signposts, or obstacles. The detected objects are simplified and displayed back to the user via banks of LEDs attached to the head-mounted display.
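"Simplified and displayed via banks of LEDs" amounts to collapsing a camera frame down to a coarse on/off grid the wearer's residual contrast vision can resolve. A minimal Python/NumPy sketch of that reduction (the Oxford prototype is built with LabVIEW and the NI Vision Development Module; the grid size, threshold, and `to_led_grid` helper here are illustrative assumptions):

```python
import numpy as np

def to_led_grid(frame, rows=4, cols=4, threshold=128):
    """Reduce a grayscale camera frame to a coarse on/off LED grid:
    average the brightness of each cell and light the LED when the
    cell is brighter than the threshold."""
    h, w = frame.shape
    grid = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            cell = frame[r*h//rows:(r+1)*h//rows, c*w//cols:(c+1)*w//cols]
            grid[r, c] = cell.mean() > threshold
    return grid

# Synthetic frame: a bright "obstacle" in the top-left quadrant.
frame = np.zeros((8, 8))
frame[:4, :4] = 255
leds = to_led_grid(frame, rows=2, cols=2)   # only the top-left LED lights
```

A detected object thus becomes a bright patch at the matching position on the head-mounted display, enough to convey "something is there, over here" without a full image.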

 

head.jpg

 

The goal is to incorporate this technology into a pair of electronic glasses. They already have a name for them: Smart Specs. These glasses will give visually impaired individuals more independence by helping them identify nearby objects and navigate their surroundings. When put into production, Smart Specs will cost about the same as modern smartphones, a much less expensive option than having to train a guide dog.

 

 

The team developed the simulation software using LabVIEW and the NI Vision Development Module because it provided ready-to-run vision analysis functions and drivers for acquiring, displaying, and logging images from a multitude of camera types. They also used the NI USB-8451 interface to acquire data from a gyroscope and control the LEDs, thereby minimizing hardware requirements.


>> Check out the full case study to read more about this application and watch the video.