Camera & Accessories Search

Saturday, November 22, 2008

Imaging Glossary

  • 1394a: Also known as FireWire (see below).
  • 1394b: Also known as FireWire-b (800 Mb/s).
  • Acquisition: Image acquisition refers to how a computer gets image data from a camera into the computer.
  • Analog: Analog cameras do not have a digital output. These cameras generally provide a TV-like signal that needs to be digitized in the host computer if it is to be used in machine vision. Although analog cameras are still widely used in machine vision, they are quickly being displaced by digital cameras, which provide a much higher-performance machine vision solution. When comparing analog vs. digital cameras, the main differences are image quality, exposure control, speed, and ease of integration.
  • Area Scan: Area scan refers to a camera sensor consisting of a rectangular array of pixels. Area scan cameras are sometimes called matrix cameras. By way of contrast, line scan cameras are those with a sensor comprising a single line of pixels (see Linescan).
  • Autoiris (Auto Iris): Some lenses, particularly those used in outdoor imaging, incorporate a galvanometer-type drive to automatically control the aperture, or iris, of the lens. There are basically two types of auto-iris: DC-type and video-type.

  • Binning: Binning is the technique of combining pixels together on a CCD to create fewer but larger pixels. True binning combines charge in adjacent pixels in a manner that increases the effective sensitivity of the camera. Machine vision cameras do not generally have true binning functions.
  • Blob Analysis: A machine vision algorithm that identifies segmented objects according to geometrical properties such as area, perimeter, color, etc.
  • Brightness: In reference to cameras, an offset setting applied equally to all pixels regardless of pixel value. Similar to the brightness setting on a typical computer monitor or television. See "Offset".
  • Camera Link: One of the common digital camera hardware interfaces on the market today. It offers high data-transfer rates, but is limited by cable length and does not have a standard communications protocol. Camera Link is largely being displaced by more modern high-performance digital interfaces such as Gigabit Ethernet (GigE Vision).
  • CCD: An abbreviation for charge-coupled device. A CCD sensor is a light-sensitive semiconductor device, which converts light particles (photons) to electrical charge (electrons). CCD cameras are one of two dominant types of sensor technologies used in machine vision. The other sensor technology is called CMOS.
  • CMOS: Complementary Metal Oxide Semiconductor. CMOS refers to an image sensor technology that is manufactured using the same processes as computer chips. This technology works like a photodiode, where the light ‘gates’ a current that is representative of the amount of light impinging on each pixel. This differs significantly from CCD technology. There are a number of advantages in using CMOS sensors over CCD, including cost, speed, anti-blooming, and programmable response characteristics (i.e., multiple slope response). CCDs also have certain advantages.
  • Dark Current: Dark current is the accumulation of electrons within a CCD or CMOS image sensor that are generated thermally rather than by light. This is a form of noise that is most problematic in low light applications requiring long exposure times.
  • DCAM: DCAM or IIDC is a software interface standard for communicating with cameras over FireWire. It is a standardized set of registers, etc. If a camera is DCAM compliant, then its control registers and data structures comply with the DCAM spec. Such a camera can be truly plug-and-play in a way that other cameras are not.
  • Decibel or dB: A logarithmic unit of measure. For digital cameras, this unit is usually used to describe signal-to-noise ratio or dynamic range.
  • Depth of Field (DOF): The maximum object depth that can be maintained entirely in focus. DOF is also the amount of object movement (in and out of best focus) allowable while maintaining a desired amount of focus.
  • Digital Imaging: Refers to the capture of a video image in such a way that the resulting image data is in digital format useful for analysis by a computer.
  • Dynamic Range: The ratio of the maximum signal relative to the minimum measurable signal, often measured in decibels (dB). Dynamic range is sometimes used interchangeably with SNR. It can also refer to Optical Dynamic Range.
  • Exposure Time: This is the amount of time that the sensor is exposed to the light. This is the control that is used first (before gain and offset) to adjust the camera. In LabVIEW, the shutter controls are a little confusing: there are ‘manual relative’, ‘manual absolute’, ‘one-push’, and ‘auto’ controls. Normally, you should use ‘manual absolute’, where each unit corresponds to 1 µs of exposure time. When using the ‘relative’ controls, the units are different: 20 µs per unit. This control is called “shutter” in LabVIEW and some DCAM controls.
  • Fast Lens: A lens that admits a lot of light. A lens with a low F-number. A typical fast lens will have a F-number of less than 1.2.
  • Field of View (FOV): The viewable area of the object under inspection. In other words, this is the portion of the object that fills the camera’s sensor.
  • FireWire: A standard computer interface and its various versions, otherwise called IEEE 1394, IEEE 1394a, or IEEE 1394b. It is an especially fast serial interface that is low cost, with plug-and-play simplicity of integration. It was among the first interfaces for digital industrial cameras to be standardized in both hardware and software communications protocols (see also GigE Vision).
  • Filter Driver: With respect to Gigabit Ethernet cameras, a filter driver, or “filter” is used to reduce the CPU burden when handling large volumes of data. The filter strips out, or “filters”, the image data from the Ethernet packets at the lowest level so that the CPU does not have to do this. Using a filter driver can significantly reduce the CPU load associated with image acquisition.
  • Frame Rate: Frame rate is the measure of camera speed. The unit of this measurement is “frames per second” (fps) and is the number of images a camera can capture in a second of time.
  • Frame Grabber (or Framegrabber): This is the industry name for the circuit board (usually a PCI card) that is an interface to connect analog cameras, or Camera Link cameras, to a computer system. With the wide range of FireWire and GigE Vision gigabit Ethernet cameras, which do not require such specialized interface cards, frame grabbers are generally no longer required.
  • Gaging (or Gauging): In reference to machine vision, this is non-contact dimensional examination and measurement of an object using an imaging system or machine vision camera.
  • Gain: This is the same as the contrast control on your TV. It is a multiplication of the signal. In math terms, it controls the “slope” of the exposure/time curve. The camera should normally be operated at the lowest gain possible, because gain not only multiplies the signal, but also multiplies the noise. Gain comes in very handy when you require a short exposure (say, because the object is moving and you do not want any blur), but do not have adequate lighting. In this situation the gain can be increased so that the image signal is strong.
  • Gigabit Ethernet: An industry-standard interface, variously called ‘GigE (gig-ee)’, ‘GbE’, ‘1000-speed’, etc., that is used for high-speed computer networks capable of achieving data transfer rates in excess of 1000 megabits per second. This generalized networking interface has now been adapted for use as a standard interface for high-performance machine vision cameras, called GigE Vision.
  • GigE Vision: ‘GigE Vision’ is an interface standard from the Automated Imaging Association (AIA) for high-performance machine vision cameras. GigE (Gigabit Ethernet), on the other hand, is simply the network structure on which GigE Vision is built. The GigE Vision standard includes a hardware interface standard (Gigabit Ethernet), communications protocols, and standardized camera control registers. The camera control registers are based on a command structure called GenICam, which seeks to establish a common software interface so that third-party software can communicate with cameras from various manufacturers without customization. GenICam is incorporated as part of the GigE Vision standard. GigE Vision is analogous to FireWire’s DCAM (IIDC) interface standard and has great value for reducing camera system integration costs and improving ease of use.
  • Global Shutter: Generally speaking, when someone says “global shutter”, they really mean “snapshot shutter”. See “Snapshot Shutter” below. In actuality, a global shutter starts all of a camera’s pixels imaging at the same time, but during readout, some pixels continue to image as others are read out. (See Rolling Shutter, Snapshot Shutter.) For machine vision applications, a snapshot shutter is generally a ‘must have’.
  • Gray Scale: Refers to a monochrome image with gradations of gray. An 8-bit camera, for example, would represent images in 256 shades of gray; a 12-bit camera, in 4096 shades.
  • Histogram: A graphical representation of the pixel values in an image. Generally the left edge of the histogram represents black, or zero, and the right edge represents white, or the maximum value (255 for 8-bit, 4095 for 12-bit). The curve shows how many pixels have each luminance value.
  • IIDC: IIDC (DCAM) is a software interface standard for communicating with cameras over FireWire. It is a standardized set of registers, etc. If a camera is IIDC compliant, then its control registers and data structures comply with the IIDC spec. Such a camera can be truly plug-and-play in a way that other cameras are not.
  • Image Analysis: The software process of generating a set of descriptors or features by which a computer may make a decision about objects in an image.
  • Integration: generally refers to the task of assembling the components of a machine vision system (camera, lens, lighting, software, etc). Usually used as short form for “System Integration”. When used in reference to what the camera does, it is another word for exposure time (see Integration Time).
  • Integration Time: Also referred to as exposure time. This is the length of time that the image sensor is exposed to light while capturing an image. This is equivalent to the exposure time of film in a photographic camera. The longer the exposure time, the more light will be acquired. Low light conditions require longer exposure times.
  • Interlaced Scan: Refers to one of two common methods for “painting” a video image on an electronic display screen (the second is progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One field contains all the odd lines in the image, the other contains all the even lines of the image.
  • Interline Transfer: A CCD architecture where there exists an opaque transfer channel between pixel columns. Such a CCD does not require a mechanical shutter but spatial resolution, dynamic range, and sensitivity are reduced due to the masked column between light sensitive columns.
  • IR Lens: A lens that is specially designed so that chromatic aberrations in the infrared wavelengths are corrected. An IR-lens should be used in cases where both visible and IR illumination is being received by the camera; otherwise the resulting image would be blurred.
  • ISO 9000, 9002: Internationally recognized standards that certify a company’s manufacturing record keeping. ISO accreditation does not imply any product quality endorsement, but it is rather an acknowledgement of the manufacturing and/or engineering record keeping practices of the accredited company.
  • Jumbo Frames: With respect to Gigabit Ethernet, jumbo frames refers to the data packet size used for each Ethernet frame. Since each data frame must be handled by the operating system, it makes sense to use large data frames to minimize the amount of overhead when receiving data into the host computer. Such large data blocks are called jumbo frames.
  • Linescan (or Linear Array): A line scan, or linear array camera has a single row of pixels and captures an image by scanning an object that moves past the lens. Conceptually similar to a desktop scanner (compare “area scan”).
  • Machine Vision: Machine vision is the application of cameras and computers to cause some automated action, based on images received by the camera(s), in a manufacturing process. Generally, the term “machine vision” applies specifically to manufacturing applications and has an automated aspect related to the vision sensors. However, it is common to use machine vision equipment and algorithms outside of the manufacturing realm.
  • Megapixel: Refers to one million pixels, relating to the spatial resolution of a camera. Any camera with roughly 1000 x 1000 or higher resolution would be called a megapixel camera.
  • Manual Focus: Refers to a lens which requires a human user to set the focus as opposed to an auto-focus lens which is controlled via a computer or camera.
  • Manual Iris: Refers to a lens which requires a human user to set the iris as opposed to an auto-iris lens which is controlled via a computer or camera.
  • Microlens: A type of technology used in some interline transfer CCD’s whereby each pixel is covered by a small lens which channels light directly into the sensitive portion of the CCD.
  • Morphology: The mathematics of shape analysis. An algebra whose variables are shapes and whose operations transform those shapes.
  • Motorized Lens: A lens whereby zoom, aperture, and focus (or one or more of these) are operated electronically. Usually, a computer operated controller is used to drive such lenses. The controller often has an RS-232 port through which a camera, or computer, controls the lens.
  • Network Adaptor: Another word for the Ethernet interface card or port found on many computers.
  • OCR: Stands for Optical Character Recognition and refers to the use of machine vision cameras and computers to read and recognize human-readable alphanumeric characters.
  • OHCI: (Open Host Controller Interface) Describes the standards created by software and hardware industry leaders (including Microsoft, Apple, Compaq, Intel, Sun Microsystems, National Semiconductor, and Texas Instruments) to assure that software (operating systems, drivers, applications) works properly with any compliant hardware.
  • Offset: This is the same as the brightness control on your TV. It is a positive DC offset of the image signal. It is used primarily to set the level of “black”. Generally speaking, for the best signal, the black level should be set so that it is near zero (but not below zero) on the histogram. Increasing the brightness beyond this point just lightens the image but without improving the image data.
  • Pixel: An abbreviated form of picture element. The individual elements that make up a digitized image array.
  • Progressive Scan: Also known as non-interlaced scanning, progressive scan is a method for displaying, storing, or transmitting moving images in which all the lines of each frame are drawn in sequence. This is in contrast to the interlacing used in traditional television systems, where the odd lines and then the even lines of each frame (each image now called a field) are drawn alternately.
  • Readout: Readout refers to how data is transferred from the CCD or CMOS sensor to the host computer. Readout rate is an important specification for high-resolution digital cameras. Higher readout rates mean that more images can be captured in a given length of time.
  • Region of Interest: Region of interest readout (ROI) refers to a camera function whereby only a portion of the available pixels are read out from the camera. This is also referred to as “partial scan” or “area of interest” (AOI).
  • Rolling Shutter: Some CMOS sensors operate in “rolling shutter” mode only, so that rows start and stop exposing at different times. This type of shutter is not suitable for moving subjects, except when using flash lighting, because the time difference causes the image to smear. (See Global Shutter, Snapshot Shutter.)
  • Sensitivity: A measure of how sensitive the camera sensor is to light input. Unfortunately there is no standardized method of describing sensitivity for digital CCD or CMOS cameras, so apples-to-apples comparisons are often difficult on the basis of this specification.
  • Sensor Size: The size of a camera sensor’s active area, typically specified in the horizontal dimension. This parameter is important in determining the proper lens magnification required to obtain a desired field of view. The primary magnification (PMAG) of the lens is defined as the ratio between the sensor size and the FOV. Although sensor size and field of view are fundamental parameters, PMAG is not; it is simply their ratio.
  • Smart Camera: Sometimes called “intelligent camera”, or “smart sensor”, the term smart camera refers to a camera with a built-in computer running image processing software in a single compact package capable of doing some simple machine vision tasks.
  • Snapshot shutter: Sometimes called a global shutter, snapshot shutter refers to an electronic shutter on CCD or CMOS sensors. A snapshot shutter is a feature of the image sensor that causes all of the pixels on the sensor to begin imaging simultaneously and to stop imaging simultaneously. This feature makes the camera especially suitable for capturing images of moving objects. (see Rolling Shutter, Global Shutter).
  • Spatial resolution: A measure of how well the CCD or camera can resolve small objects. Usually used in reference not only to the pixel resolution, but also to lens resolution, i.e., the resolution of the whole optical system. See also High Resolution.
  • System Integrator: A company or person who provides turnkey vision systems using cameras, computers, software, and possibly robotics and other mechanical hardware usually aimed at a specific customer application and installation.
  • Sync: Refers to an external signal generated by a camera that can be used to synchronize the camera with outside events such as flash illumination, or other cameras.
  • Trigger: An input to an industrial digital camera that initiates the image capture sequence. Otherwise, an electrical signal or set of signals used to synchronize a camera, or cameras, to an external event.
  • Video-type auto iris: There are two major types of auto-iris lenses: DC-type, and video-type. The video-type auto-iris requires a video signal to determine how far to open the iris on the lens.
  • Working Distance (WD): The distance from the front of the lens to the object under inspection.
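Several of the entries above (binning, gain, offset) boil down to simple per-pixel arithmetic. As an illustration, the effect of 2x2 binning can be sketched in a few lines of NumPy; note this is software binning of an already-read-out array, not the true on-chip charge binning the glossary entry describes:

```python
import numpy as np

def bin_2x2(image):
    """Sum each 2x2 block of pixels into one larger 'super-pixel'.

    Four pixel values are combined into one, quadrupling the signal
    per output pixel at the cost of halved spatial resolution.
    """
    h, w = image.shape
    # Trim odd rows/columns so the image tiles evenly into 2x2 blocks.
    image = image[: h - h % 2, : w - w % 2]
    return (
        image.reshape(image.shape[0] // 2, 2, image.shape[1] // 2, 2)
        .sum(axis=(1, 3))
    )

# A hypothetical 4x4 sensor readout:
frame = np.arange(16, dtype=np.uint16).reshape(4, 4)
binned = bin_2x2(frame)
print(binned.shape)  # (2, 2)
```

Because the block sums preserve total signal, the binned image carries the same total charge as the original, just spread over a quarter as many pixels.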
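The gain and offset entries likewise amount to a multiply and an add applied to every pixel, followed by clipping to the sensor's output range. A minimal sketch, with hypothetical pixel values and an illustrative function name:

```python
import numpy as np

def apply_gain_offset(image, gain=1.0, offset=0.0, max_value=255):
    """Apply camera-style gain (multiply) and offset (add) to pixels.

    Gain scales both signal and noise; offset shifts the black level.
    Results are clipped to the output range, just as a real camera's
    output saturates.
    """
    out = image.astype(np.float64) * gain + offset
    return np.clip(out, 0, max_value).astype(image.dtype)

# Hypothetical underexposed 8-bit image values:
dark = np.array([[10, 20], [30, 40]], dtype=np.uint8)
brightened = apply_gain_offset(dark, gain=2.0, offset=5.0)
print(brightened)
```

This also shows why gain should be kept low: any noise in the input values is multiplied by the same factor as the signal.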
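The decibel and dynamic range entries can be made concrete: dynamic range in dB is 20 times the base-10 logarithm of the ratio of the largest measurable signal to the noise floor. A short sketch with made-up sensor figures (the full-well and read-noise numbers are purely illustrative):

```python
import math

def dynamic_range_db(full_well_electrons, read_noise_electrons):
    """Dynamic range in dB: 20 * log10(max signal / noise floor)."""
    return 20 * math.log10(full_well_electrons / read_noise_electrons)

# Hypothetical CCD: 20,000 e- full well, 10 e- read noise.
print(round(dynamic_range_db(20_000, 10), 1))  # 66.0
```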

Friday, November 21, 2008

Integrating thermal imaging into surveillance systems

Defense & Security

Glen Francisco

Thermal imaging camera systems can be merged with existing surveillance technologies and new image-processing algorithms to protect critical infrastructures more efficiently.

With the increased concern over terrorist threats at critical infrastructure sites, installing and operating comprehensive detection, management, and control systems has become imperative. This can be achieved by selecting appropriate components from the most powerful surveillance technologies available and using each to create highly-effective security systems. Thermal imaging, automated software detection, immersive visual assessment, and wide-area command and control are presently considered the four building blocks of a completely integrated security system.

It is now recognized that many US energy plants, commerce hubs, and other key sites, mostly monitored by closed-circuit television (CCTV), currently need to improve their detection and monitoring capabilities. When compared to modern methods, most conventional CCTV systems have major shortcomings, including limited all-weather situational awareness, impaired nighttime detection, and lack of early detection functionalities. They are also subject to operator fatigue or other human errors.1 Integrating a system using the four building blocks offers the means to significantly decrease these shortcomings.

The popularity of thermal imaging cameras at critical infrastructure sites is largely due to their good performance at night or in challenging weather, their ability to see through foliage, and their covert surveillance and long-range detection capabilities. In remote locations, the lack of adequate lighting is always a concern: shadowy corners, dense foliage, or other dark areas can become security breaches. Visible CCTV cameras and short-wave infrared cameras have difficulty detecting intruders in dark areas because of their dependence on a visible light source. In contrast, thermal imaging detects radiation in the infrared range of the electromagnetic spectrum. Since infrared radiation is emitted by all objects based on their temperature, thermal imaging cameras can pick up warm objects, which stand out well against cooler backgrounds. Humans and other warm-blooded animals are thus easily detected against the environment, day or night. In addition, the thermal wavelengths used by such cameras make it easier to detect threatening activities under inclement weather conditions. Even at long distances, these cameras remain the most effective choice for surveying large areas.

The performance of a critical infrastructure security surveillance system can also be improved by combining thermal imaging cameras, working either as stand-alone monitors or as part of a network, with advanced image-processing algorithms that can improve reliability while increasing the degree of system automation and the level of situational awareness.
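At their simplest, the automated detection algorithms described above reduce to thresholding a thermal frame and locating the connected warm region. The following is a minimal illustrative sketch, not any vendor's actual algorithm; the frame values, threshold, and function name are hypothetical, and NumPy is assumed:

```python
import numpy as np

def detect_warm_regions(frame, threshold):
    """Flag pixels warmer than a background threshold and return a
    boolean mask plus the bounding box of the warm pixels.
    """
    mask = frame > threshold
    if not mask.any():
        return mask, None
    rows, cols = np.nonzero(mask)
    bbox = tuple(int(v) for v in (rows.min(), cols.min(),
                                  rows.max(), cols.max()))
    return mask, bbox

# Hypothetical 8-bit thermal frame: cool scene with one warm intruder.
scene = np.full((6, 6), 40, dtype=np.uint8)
scene[2:4, 3:5] = 200  # warm body
mask, bbox = detect_warm_regions(scene, threshold=100)
print(bbox)  # (2, 3, 3, 4)
```

A production system would add connected-component labeling, size filtering, and tracking over time, but the warm-against-cool contrast that makes thermal imaging attractive is exactly what makes even this simple threshold effective.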

Modern surveillance systems should be able to detect events, evaluate the degree of threat, and archive or provide real-time reports to a command center. In addition, a 3D-immersive video surveillance system, with either single, multiple, or pan/tilt/zoom cameras, can further enhance situational awareness for security personnel. This can be achieved by creating a 3D visual context that seamlessly merges “live” video streams from security cameras with a 3D representation of the monitored facility, with further enhancement possible using algorithms that can increase awareness and reliability to higher levels.

While 3D-immersive video surveillance provides situational awareness, a wide-area remote surveillance system integrates sensors of any type over very large areas to ensure effective responses to security threats. The wide-area surveillance can detect and respond to a series of alarms, while one (or more) 3D-immersive system(s) can manage an individual event.

The level of threat to critical infrastructures across the world is high, and these facilities are expected to remain vulnerable in the coming years. The value of a modernized surveillance system that increases security and safety at these sites is accordingly enormous. This is why automated thermal detection and software systems are presently considered a highly valuable addition to traditional security systems for protecting the public and assets to the fullest degree and with the highest confidence.


FLIR's Thermal Imaging Cameras

Thermal Insulation and Fireproofing Materials

Thermal insulation and fireproofing materials reduce the flow of heat through the thickness of the material. They are typically fiber-based or foam structures prepared from thermally-stable materials. There are five basic types of thermal insulation and fireproofing materials: fiberglass, glass wool, polymeric materials, cellulosic fibers, and ceramics or refractories. Fiberglass is offered typically as batting or as a high-loft, flexible structure; however, board-like products are also suitable for many industrial applications. Glass wool is spun from slag, rock, glasses or minerals that have been melted and produced as fibers. Polymeric materials are high-molecular-weight materials, often hydrocarbon-based. They can be prepared into films, fibers, fabrics or foams. Cellulosic fibers are prepared from wood pulp, cotton and other natural resources. Refractories are hard, heat-resistant thermal insulation and fireproofing materials such as alumina cement, fire clay, bricks, pre-cast shapes, cement or monolithics and ceramic kiln furniture. Ceramics and refractories have high melting points and are suitable for applications requiring wear resistance, high-temperature strength, electrical or thermal insulation or other specialized characteristics.

There are five main forms for thermal insulation and fireproofing materials: bulk chopped fibers, textiles or fibrous mats, foam, board and block insulation, and films or foils. Bulk chopped fibers provide loose, flowable insulation that can be filled or applied into an application, or serve as the basis for a textile or mat. Textiles and fibrous mats are made by weaving, knitting, braiding or web extrusion. These thermal insulation and fireproofing materials can also be needlepunched, formed from a slurry (wet laid), or produced in other ways. The properties of finished products depend upon the manufacturing process, fiber material and fiber size. Foam insulation materials are made from low-density elastomers, plastics, and other materials with various porosities. They are used in a variety of architectural, industrial, medical, and consumer applications. Board and block insulation is made from a variety of base materials in the form of a sheet, strip, plate or slab. Thermal insulation and fireproofing materials are also prepared from films, foils or composite structures with foam, fabrics or other materials.

Thermal insulation and fireproofing materials differ in terms of specifications and features. Use temperature and thermal conductivity are two important parameters to consider. Use temperature is the range through which a material can be exposed without degradation of its structural or other required end-use properties. Thermal conductivity is the linear heat transfer per unit area through a material for a given applied temperature gradient. In terms of features, some thermal insulation and fireproofing materials are flame-retardant, electrically insulating, and chemical or fuel resistant. Others are UL approved, a designation from Underwriters Laboratories (UL). Hydrophilic (absorbent) and hydrophobic (waterproof) thermal insulation and fireproofing materials are also commonly available.
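Thermal conductivity as defined above can be turned directly into a heat-flow estimate with Fourier's law, q = k * dT / L. A minimal sketch; the material figures below are made-up illustrative values, not data for any specific product:

```python
def heat_flux(conductivity_w_mk, thickness_m, delta_t_k):
    """Steady-state heat flux (W/m^2) through a flat insulation layer,
    per Fourier's law: q = k * dT / L.
    """
    return conductivity_w_mk * delta_t_k / thickness_m

# Hypothetical fiberglass batt: k = 0.04 W/(m*K), 100 mm thick,
# with a 20 K temperature difference across it.
q = heat_flux(0.04, 0.10, 20.0)
print(q)  # 8.0
```

The relationship makes the trade-off explicit: halving the conductivity or doubling the thickness halves the heat lost per square metre.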

Miniaturised thermal IR technology enhances tactical UAV capabilities, by Cedip Infrared Systems - optics.org

Cedip Infrared Systems (www.cedip-infrared.com) has announced that the first consignment of novel miniaturised thermal IR cameras, for incorporation into the new Tracker UAV system of the European Aeronautic Defence and Space Company (EADS) N.V., will be delivered during 2007. Subject to EADS's receipt of several large contracts, further orders are expected by Cedip Infrared Systems during 2008.

The design of the Tracker UAV system (known in France under the name DRAC) is expected to significantly improve reconnaissance capabilities for civil and military authorities. The user-friendly system, which can be prepared for operation in a matter of minutes, offers excellent flight characteristics and robustness even under extreme operational conditions. Tracker is a tactical drone with hand-launchability for missions at close range. The system meets the highest performance requirements even under complex operational conditions.

The Tracker UAV system, developed by EADS, delivers state-of-the-art over-the-hill reconnaissance and surveillance, close-range attack success analysis, and remote location, identification, classification and tracking.

The M1-Night StalkIR™ Thermal Imaging System

The M1-Night StalkIR™ Thermal Imaging System brings the industry's most advanced thermal imaging capabilities together with IEC Infrared Systems' proprietary signal processing and electronics in a rugged, environmentally sealed pan & tilt positioning stage. This thermal infrared imaging system combines superior imaging capability with an integrated high-performance visual video camera to provide full day/night surveillance capability in any weather. The M1-NightStalkIR™ series of thermal imaging cameras and surveillance systems is adaptable to a wide variety of situations and applications: the system is equally suited for fixed installation, mobile (vehicle) mounting, or drop-deployable use, giving flexibility in any tactical situation. Optional features such as AC power input and either wireless or fiber-optic video and data transmission allow this thermal imager to perform equally well in all configurations.

System Features
• Low-light visual camera standard with infrared imager
• Single integrated payload enclosure for both infrared and visual imagers
• Optional Image Intensified (I2) camera with optical zoom
• Optional GPS/compass, fiber-optic, and wireless capabilities
• Full 360° rotation (with optional slip ring)
• Programmable scan pattern
• On-screen position display of imaging direction and other tactical data
• IEC's Advanced Signal Processing (ASP) system with thermal image colorization
• Hand controller or PC software control (included)
• Networkable with IEC's patent-pending IntrudIR Alert™ alarm/tracking system

Payload
The M1-NightStalkIR™ family of thermal imaging cameras and surveillance systems provides the ultimate in multispectral imaging capabilities. The imaging payload uses state-of-the-art uncooled thermal infrared detector technology, with lens options providing fields of view from 18° to 1° (HFOV). Coupling this detector with IEC's proprietary Advanced Signal Processing (ASP) image processing hardware and algorithms completes the thermal imager and provides the sharpest, clearest colorized image (user-selectable palettes) available today. A high-performance, low-light-level visual camera (with 26X optical zoom) is standard, giving the M1-NightStalkIR™ a 24/7, all-weather imaging capability. An optional Gen III Image Intensified Camera with optical zoom can be added for even greater night vision in the visual spectrum, while optional systems such as a laser range finder and GPS/compass can be added to enhance tactical awareness by providing the precise location of observed targets.

Mounting System
The M1-NightStalkIR™ payload can be either fixed mounted or mated to a ruggedized, environmentally tight pan & tilt system, specifically engineered to perform in the most demanding military environments on earth. This high-performance positioning device provides rotation rates up to 40°/sec, with precision to +/- 0.014°. All system functions can be controlled using a ruggedized hand controller (operable when wearing either Arctic wear or NBC protection), while all system data (camera pointing direction, system settings, GPS data, vector compass) is available in an on-screen display, or through PC-based software (included). The thermal imaging system may also be networked and controlled using the Pelco command protocols, or through IEC's exclusive, patented IntrudIR Alert™ system.