Camera & Accessories Search

Monday, December 29, 2008

A Look At The Flip Video MinoHD Camcorder

By Judith Allison

Many people have not heard of the new Flip Video MinoHD Camcorder, yet it is truly one of the best camcorders on the market. It is essentially the thinnest camcorder in the electronics industry today. The other amazing thing about this camera is how well it works with YouTube: with the click of a single button, your recordings are uploaded straight to YouTube. This is a great feature for anyone who has a YouTube channel.

If you have a Mac, there is no need to worry, because this camcorder works well with both the PC and the Mac. This cross-platform compatibility is a feature that not all camcorders have.

Many people find almost nothing to criticize in this camcorder. The fact of the matter is that the MinoHD is one of the best cameras on the market and has very few flaws. The one thing that many people do not like is that there is no expansion slot for storing additional videos.

A great thing about the MinoHD Camcorder is that it is extremely easy to use. Many people are new to camcorders and are easily confused by expensive models, but the MinoHD is both affordable and simple to operate. It also has an extremely sleek design, so if you are one of those people who care about what their camera looks like, this will be an excellent camera for you. Many people say the MinoHD looks much like the original Flip Video Mino.

If you are curious, here are some of this camera's technical specs. The LCD screen measures 1.5 inches. If you like filming in low-light settings, this camera is great for you, and it records video at a solid 30 frames per second.

The lens on this unit is a fixed-focus lens with a maximum aperture of f/2.4. If you know what f-numbers mean on a regular camera, you know that an f/2.4 lens captures low light very well.

If you are wondering about the battery, the lithium battery lasts two to four hours. The unit will also run on two AA batteries: AA alkaline batteries give you about two hours of battery life, while Energizer lithium AAs give you about five hours.

For many people, a camera that powers up quickly is essential for capturing the most special moments. The Flip Video MinoHD Camcorder is perfect for these people because it powers up in about four seconds. The minimum Windows requirements for this unit are an Intel Pentium 4 at 2.0 GHz; Windows XP SP2 with 512 MB of RAM, or Windows Vista with 2 GB of RAM; and a USB port. The majority of computers on the market meet these requirements, so this is really nothing to worry about.

Many people like to share videos online. If you are this type of person, this camera will work extremely well for you: it gives you unlimited email videos and video greeting cards, and it makes it extremely easy to upload your videos to your favorite websites. If you like to edit videos, this camera is also a great fit, because you can do simple editing right on the camera: create titles and credits, add your own music to your favorite videos, and even grab a still photo snapshot.

Overall, the Flip Video MinoHD Camcorder is one of the best cameras on the market. People who buy this camera love it, and many even give it as a gift to their loved ones.

Tuesday, December 16, 2008

The Canon Camera Guide to Pixels

By Tim Harris

Buying a new Canon camera can be very confusing, as there are so many terms that sound like a foreign language. To make a good choice when purchasing a Canon digital camera, you will need to know the meaning of some basic terms: pixels, white balance, dpi, and ppi. All of these terms affect, in some way, the quality of the image a Canon digital camera will produce. In this guide, we will explain what a pixel is and how it determines image quality.

Pixel is short for picture element. Every Canon digital camera captures pictures as a grid of small squares. A digital image might look as seamless as a normal photograph, but if you magnify it closely, it is actually a mosaic of millions of small, differently colored squares stitched together. Each pixel is defined by three numbers in the range 0 to 255, one each for the red, green, and blue color channels. For example, a pixel can be defined as red 35, green 70, blue 255. With this coding system there are about 16 million possible color combinations. In computer terminology, each channel value is an 8-bit (one-byte) number, so a computer records the color of a pixel as three 8-bit numbers, one for each color channel.
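For readers who like to see the arithmetic, here is a short Python sketch of the channel coding described above; the `encode_pixel` helper is purely an illustration, not part of any camera software:

```python
# Sketch: how a single pixel is encoded as three 8-bit channel values.
# Each channel ranges 0-255, giving 256**3 possible colors.

def encode_pixel(red, green, blue):
    """Pack three 8-bit channel values (0-255) into one integer."""
    for channel in (red, green, blue):
        if not 0 <= channel <= 255:
            raise ValueError("each channel must be in the range 0-255")
    return (red << 16) | (green << 8) | blue

# The example pixel from the text: red 35, green 70, blue 255.
pixel = encode_pixel(35, 70, 255)

# Total number of representable colors: 256 per channel, three channels.
total_colors = 256 ** 3  # 16,777,216 -- the "16 million" figure
```

Running this confirms that three 8-bit channels give 16,777,216 distinct colors.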

The use of pixel counts is not confined to digital imaging. Camera manufacturers nowadays also categorize camera quality according to the pixel count their cameras can capture. Depending on the type and model in question, most Canon digital cameras capture between 5 and 10 megapixels. A megapixel is one million pixels, so a 5-megapixel camera captures an image containing 5 million pixels. In terms of print quality, the more pixels a digital image has, the sharper its printed version will be.
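To make the megapixel arithmetic concrete, here is a small Python sketch; the 2592 x 1944 resolution and the 300 ppi print density are assumed illustrative figures for a roughly 5-megapixel sensor, not Canon specifications:

```python
# Sketch: relating sensor resolution to megapixels and maximum print size.

def megapixels(width, height):
    """Total pixel count expressed in millions of pixels."""
    return width * height / 1_000_000

def max_print_inches(width, height, ppi=300):
    """Largest print, in inches, at a given pixels-per-inch density."""
    return (width / ppi, height / ppi)

# An assumed ~5-megapixel resolution for illustration.
mp = megapixels(2592, 1944)                 # about 5.0 megapixels
print_size = max_print_inches(2592, 1944)   # about 8.6" x 6.5" at 300 ppi
```

This shows why a 5-megapixel image prints sharply at 5" x 7" but is closer to its limit at 8" x 10".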

When talking about pixel counts, you also need to consider whether you are referring to the "Total Pixels" count or the "Effective Pixels" count. Total pixel count includes every pixel on the sensor. However, the pixels at the edge of the sensor are not used in the final digital image, so the usable count is lower. Effective pixel count refers to the count after these edge pixels have been excluded.

Depending on the size of the prints you want to make, a 5-megapixel Canon camera produces very good quality 5" x 7" printouts and decent 8" x 10" printouts. But if you are going to make 8" x 10" printouts most of the time, an 8-megapixel or 10-megapixel Canon camera is the more ideal choice.

Saturday, November 22, 2008

Imaging Glossary

* 1394a: Also known as FireWire (see below).

* 1394b: Also known as FireWire-b (800 Mb/s).

* Acquisition: Image acquisition refers to how a computer gets image data from a camera into the computer.

* Analog: Analog cameras do not have a digital output. These cameras generally provide a TV-like signal that needs to be digitized in the host computer if it is to be used in machine vision. Although analog cameras are still used widely in machine vision they are quickly being displaced by digital cameras, which provide a much higher performance machine vision solution. When comparing analog vs. digital cameras, the main differences are image quality, exposure control, speed, and ease of integration.

* Area scan: Area scan refers to a camera sensor consisting of a rectangular array of pixels. Area scan cameras are sometimes called matrix cameras. By way of contrast, line scan cameras are those with a sensor comprising a single line of pixels (see Linescan).

* Autoiris (Auto Iris) : Some lenses, particularly those used in outdoor imaging, incorporate a galvanometer-type drive to automatically control the aperture, or iris, of the lens. There are basically two types of auto-iris: DC-type and video type.

  • Binning: Binning is the technique of combining pixels together on a CCD to create fewer but larger pixels. True binning combines charge in adjacent pixels in a manner that increases the effective sensitivity of the camera. Machine vision cameras do not generally have true binning functions.
  • Blob Analysis: A machine vision algorithm that identifies segmented objects according to geometric properties such as area, perimeter, size, color, etc.
  • Brightness: In reference to cameras, an offset setting applied equally to all pixels regardless of the pixel value. Similar to the brightness setting on a typical computer monitor or television. See “Offset”
  • Camera Link: One of the common digital camera hardware interfaces on the market today. It offers high data-transfer rates but is limited by cable length and does not have a standard communications protocol. Camera Link is largely being displaced by more modern high-performance digital interfaces such as Gigabit Ethernet (GigE Vision).
  • CCD: An abbreviation for charge-coupled device. A CCD sensor is a light-sensitive semiconductor device, which converts light particles (photons) to electrical charge (electrons). CCD cameras are one of two dominant types of sensor technologies used in machine vision. The other sensor technology is called CMOS.
  • CMOS: Complementary Metal Oxide Semiconductor. CMOS refers to an image sensor technology manufactured using the same processes as computer chips. The technology works like a photodiode, where the light 'gates' a current that is representative of the amount of light impinging on each pixel. This differs significantly from CCD technology. There are a number of advantages to using CMOS sensors over CCD, including cost, speed, anti-blooming, and programmable response characteristics (i.e. multiple-slope response). CCDs also have certain advantages.
  • Dark Current: Dark current is the accumulation of electrons within a CCD or CMOS image sensor that are generated thermally rather than by light. This is a form of noise that is most problematic in low light applications requiring long exposure times.
  • DCAM: DCAM or IIDC is a software interface standard for communicating with cameras over FireWire. It is a standardized set of registers etc. If a camera is DCAM compliant then its control registers and data structures comply with the DCAM spec. Such a camera can be truly plug-and-play in a way that other cameras are not.
  • Decibel or dB: A logarithmic unit of measure. When used for digital cameras, this unit usually describes signal-to-noise ratio or dynamic range.
  • Depth of Field (DOF): The maximum object depth that can be maintained entirely in focus. DOF is also the amount of object movement (in and out of best focus) allowable while maintaining a desired amount of focus.
  • Digital Imaging: Refers to the capture of a video image in such a way that the resulting image data is in digital format useful for analysis by a computer.
  • Dynamic Range: The ratio of the maximum signal to the minimum measurable signal, often measured in decibels (dB). Dynamic range is sometimes used interchangeably with SNR. It can also refer to optical dynamic range.
  • Exposure Time: This is the amount of time that the sensor is exposed to light, and it is the first control used (before gain and offset) to adjust the camera. In LabVIEW, the shutter controls are a little confusing: there are 'manual relative', 'manual absolute', 'one-push', and 'auto' controls. Normally, you should use 'manual absolute', where each unit corresponds to 1 µs of exposure time. When using the 'relative' controls, the units are different: 20 µs per unit. This control is called 'shutter' in LabVIEW and some DCAM controls.
  • Fast Lens: A lens that admits a lot of light; a lens with a low F-number. A typical fast lens will have an F-number of less than 1.2.
  • Field of View (FOV): The viewable area of the object under inspection. In other words, this is the portion of the object that fills the camera’s sensor.
  • FireWire: A standard computer interface and its various versions otherwise called IEEE 1394, IEEE-1394a, or IEEE-1394b. It is an especially fast serial interface that is low cost with plug and play simplicity of integration. It is currently the only interface for digital industrial cameras that is standardized both in hardware and software communications protocols.
  • Filter Driver: With respect to Gigabit Ethernet cameras, a filter driver, or “filter” is used to reduce the CPU burden when handling large volumes of data. The filter strips out, or “filters”, the image data from the Ethernet packets at the lowest level so that the CPU does not have to do this. Using a filter driver can significantly reduce the CPU load associated with image acquisition.
  • Frame Rate: Frame rate is the measure of camera speed. The unit of this measurement is “frames per second” (fps) and is the number of images a camera can capture in a second of time.
  • Frame Grabber (or Framegrabber): This is the industry name for the circuit board (usually a PCI card) that is an interface to connect analog cameras, or Camera Link cameras, to a computer system. With the wide range of FireWire and GigE Vision gigabit Ethernet cameras, which do not require such specialized interface cards, frame grabbers are generally no longer required.
  • Gaging (or Gauging): In reference to machine vision, this is non-contact dimensional examination and measurement of an object using an imaging system or machine vision camera.
  • Gain: This is the same as the contrast control on your TV. It is a multiplication of the signal. In math terms, it controls the “slope” of the exposure/time curve. The camera should normally be operated at the lowest gain possible, because gain not only multiplies the signal, but also multiplies the noise. Gain comes in very handy when you require a short exposure (say, because the object is moving and you do not want any blur), but do not have adequate lighting. In this situation the gain can be increased so that the image signal is strong.
  • Gigabit Ethernet: An industry standard interface, variously called 'GigE (gig-ee)', 'GbE', '1000-speed', etc., used for high-speed computer networks capable of data transfer rates in excess of 1000 megabits per second. Gigabit Ethernet has now been adapted to high-performance CCD cameras for industrial applications. This generalized networking interface, adapted for use as a standard interface for high-performance machine vision cameras, is called GigE Vision.
  • GigE Vision: 'GigE Vision' is an interface standard from the Automated Imaging Association (AIA) for high-performance machine vision cameras. GigE (Gigabit Ethernet), on the other hand, is simply the network structure on which GigE Vision is built. The GigE Vision standard includes a hardware interface standard (Gigabit Ethernet), communications protocols, and standardized camera control registers. The camera control registers are based on a command structure called GenICam, which seeks to establish a common software interface so that third-party software can communicate with cameras from various manufacturers without customization. GenICam is incorporated as part of the GigE Vision standard. GigE Vision is analogous to FireWire's DCAM (IIDC) interface standard and has great value for reducing camera system integration costs and improving ease of use.
  • Global Shutter: Generally speaking, when someone says "global shutter", they really mean "snapshot shutter"; see "Snapshot Shutter" below. Strictly, a global shutter starts all of a camera's pixels imaging at the same time, but during readout some pixels continue to image as others are read out (see Rolling Shutter, Snapshot Shutter). For machine vision applications, a snapshot shutter is generally a must-have.
  • Gray Scale: Refers to a monochrome image with gradations of gray. An 8-bit camera, for example, would represent images in 256 shades of gray; a 12-bit camera would represent images in 4096 shades of gray.
  • Histogram: A graphical representation of the pixel values in an image. Generally, the left edge of the histogram represents black, or zero, and the right edge represents white (255 for 8-bit images, 4095 for 12-bit images). The height of the curve shows how many pixels have each luminance value.
  • IIDC: IIDC (DCAM) is a software interface standard for communicating with cameras over FireWire. It is a standardized set of registers, data structures, and so on. If a camera is IIDC compliant, then its control registers and data structures comply with the IIDC spec. Such a camera can be truly plug-and-play in a way that other cameras are not.
  • Image Analysis: The software process of generating a set of descriptors or features by which a computer may make a decision about objects in an image.
  • Integration: generally refers to the task of assembling the components of a machine vision system (camera, lens, lighting, software, etc). Usually used as short form for “System Integration”. When used in reference to what the camera does, it is another word for exposure time (see Integration Time).
  • Integration Time: Also referred to as exposure time. This is the length of time that the image sensor is exposed to light while capturing an image. This is equivalent to the exposure time of film in a photographic camera. The longer the exposure time, the more light will be acquired. Low light conditions require longer exposure times.
  • Interlaced Scan: Refers to one of two common methods for “painting” a video image on an electronic display screen (the second is progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One field contains all the odd lines in the image, the other contains all the even lines of the image.
  • Interline Transfer: A CCD architecture where there exists an opaque transfer channel between pixel columns. Such a CCD does not require a mechanical shutter but spatial resolution, dynamic range, and sensitivity are reduced due to the masked column between light sensitive columns.
  • IR Lens: A lens that is specially designed so that chromatic aberrations in the infrared wavelengths are corrected. An IR-lens should be used in cases where both visible and IR illumination is being received by the camera; otherwise the resulting image would be blurred.
  • ISO 9000, 9002: Internationally recognized standards that certify a company’s manufacturing record keeping. ISO accreditation does not imply any product quality endorsement, but it is rather an acknowledgement of the manufacturing and/or engineering record keeping practices of the accredited company.
  • Jumbo Frames: With respect to Gigabit Ethernet, jumbo frames refers to the data packet size used for each Ethernet frame. Since each data frame must be handled by the operating system, it makes sense to use large data frames to minimize overhead when receiving data into the host computer. Such large data blocks are called jumbo frames.
  • Linescan (or Linear Array): A line scan, or linear array camera has a single row of pixels and captures an image by scanning an object that moves past the lens. Conceptually similar to a desktop scanner (compare “area scan”).
  • Machine Vision: Machine vision is the application of cameras and computers to cause some automated action, based on images received by the camera(s), in a manufacturing process. Generally, the term applies specifically to manufacturing applications and has an automated aspect related to the vision sensors. However, it is common to use machine vision equipment and algorithms outside of the manufacturing realm.
  • Megapixel: Refers to one million pixels, relating to the spatial resolution of a camera. Any camera with roughly 1000 x 1000 or higher resolution would be called a megapixel camera.
  • Manual Focus: Refers to a lens which requires a human user to set the focus as opposed to an auto-focus lens which is controlled via a computer or camera.
  • Manual Iris: Refers to a lens which requires a human user to set the iris as opposed to an auto-iris lens which is controlled via a computer or camera.
  • Microlens: A type of technology used in some interline transfer CCD’s whereby each pixel is covered by a small lens which channels light directly into the sensitive portion of the CCD.
  • Morphology: The mathematics of shape analysis. An algebra whose variables are shapes and whose operations transform those shapes.
  • Motorized Lens: A lens whereby zoom, aperture, and focus (or one or more of these) are operated electronically. Usually, a computer operated controller is used to drive such lenses. The controller often has an RS-232 port through which a camera, or computer, controls the lens.
  • Network Adaptor: Another word for the Ethernet interface card or port found on many computers.
  • OCR: Stands for Optical Character Recognition; the use of machine vision cameras and computers to read and recognize human-readable alphanumeric characters.
  • OHCI: (Open Host Controller Interface) describes the standards created by software and hardware industry leaders–including Microsoft, Apple, Compaq, Intel, Sun Microsystems, National Semiconductor, and Texas Instruments–to assure that software (operating systems, drivers, applications) works properly with any compliant hardware.
  • Offset: This is the same as the brightness control on your TV. It is a positive DC offset of the image signal. It is used primarily to set the level of “black”. Generally speaking, for the best signal, the black level should be set so that it is near zero (but not below zero) on the histogram. Increasing the brightness beyond this point just lightens the image but without improving the image data.
  • Pixel: An abbreviated form of picture element. The individual elements that make up a digitized image array.
  • Progressive Scan: Also known as non-interlaced scanning, progressive scan is a method for displaying, storing, or transmitting moving images in which all the lines of each frame are drawn in sequence. This is in contrast to the interlacing used in traditional television systems, where the odd lines and then the even lines of each frame (each image now called a field) are drawn alternately.
  • Readout: Readout refers to how data is transferred from the CCD or CMOS sensor to the host computer. Readout rate is an important specification for high-resolution digital cameras. Higher readout rates mean that more images can be captured in a given length of time.
  • Region of Interest: Region of interest readout (ROI) refers to a camera function whereby only a portion of the available pixels are read out from the camera. This is also referred to as “partial scan” or “area of interest” (AOI).
  • Rolling Shutter: Some CMOS sensors operate in “rolling shutter” mode only so that the rows start, and stop, exposing at different times. This type of shutter is not suitable for moving subjects except when using flash lighting because this time difference causes the image to smear. (see Global Shutter, Snapshot Shutter).
  • Sensitivity: A measure of how sensitive the camera sensor is to light input. Unfortunately there is no standardized method of describing sensitivity for digital CCD or CMOS cameras, so apples-to-apples comparisons are often difficult on the basis of this specification.
  • Sensor Size: The size of a camera sensor’s active area, typically specified in the horizontal dimension. This parameter is important in determining the proper lens magnification required to obtain a desired field of view. The primary magnification (PMAG) of the lens is defined as the ratio between the sensor size and the FOV. Although sensor size and field of view are fundamental parameters, it is important to realize that PMAG is not.
  • Smart Camera: Sometimes called “intelligent camera”, or “smart sensor”, the term smart camera refers to a camera with a built-in computer running image processing software in a single compact package capable of doing some simple machine vision tasks.
  • Snapshot shutter: Sometimes called a global shutter, snapshot shutter refers to an electronic shutter on CCD or CMOS sensors. A snapshot shutter is a feature of the image sensor that causes all of the pixels on the sensor to begin imaging simultaneously and to stop imaging simultaneously. This feature makes the camera especially suitable for capturing images of moving objects. (see Rolling Shutter, Global Shutter).
  • Spatial resolution: A measure of how well the CCD or camera can resolve small objects. Usually refers not only to the pixel resolution but also to the lens resolution, i.e. the resolution of the whole optical system.
  • System Integrator: A company or person who provides turnkey vision systems using cameras, computers, software, and possibly robotics and other mechanical hardware usually aimed at a specific customer application and installation.
  • Sync: Refers to an external signal generated by a camera that can be used to synchronize the camera with outside events such as flash illumination, or with other cameras.
  • Trigger: An input to an industrial digital camera that initiates the image capture sequence. Otherwise, an electrical signal or set of signals used to synchronize a camera, or cameras, to an external event.
  • Video-type auto iris: There are two major types of auto-iris lenses: DC-type, and video-type. The video-type auto-iris requires a video signal to determine how far to open the iris on the lens.
  • Working Distance (WD): The distance from the front of the lens to the object under inspection.
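The binning entry above can be sketched numerically. Assuming a plain Python list of lists as a stand-in for sensor data (a simplification of how real CCD hardware combines charge), true 2x2 binning sums each block of four adjacent pixels:

```python
def bin_2x2(image):
    """Combine each 2x2 block of pixels into one by summing their values,
    halving resolution in each dimension while boosting signal per pixel."""
    # Trim odd rows/columns so the image divides evenly into 2x2 blocks.
    h, w = len(image) // 2 * 2, len(image[0]) // 2 * 2
    return [
        [image[r][c] + image[r][c + 1] + image[r + 1][c] + image[r + 1][c + 1]
         for c in range(0, w, 2)]
        for r in range(0, h, 2)
    ]

frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
binned = bin_2x2(frame)  # [[14, 22], [46, 54]]
```

Each output pixel carries the combined signal of four input pixels, which is why binning raises effective sensitivity at the cost of resolution.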
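Similarly, the gain and offset entries describe a simple linear transform on pixel values. This minimal Python sketch assumes 8-bit pixels clamped to 0-255; the function name is illustrative, not any camera API:

```python
def apply_gain_offset(pixels, gain=1.0, offset=0):
    """Apply gain (multiplicative, like contrast) then offset (additive,
    like brightness) to each pixel, clamping to the 8-bit range 0-255."""
    def clamp(value):
        return max(0, min(255, round(value)))
    return [clamp(p * gain + offset) for p in pixels]

row = [0, 50, 100, 200]
boosted = apply_gain_offset(row, gain=1.5, offset=10)  # [10, 85, 160, 255]
```

Note how the brightest input saturates at 255: gain multiplies signal and noise alike, which is why the glossary recommends the lowest workable gain.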

Friday, November 21, 2008

Integrating thermal imaging into surveillance systems

"Defense & Security

Glen Francisco

Thermal imaging camera systems can be merged with existing surveillance technologies and new image-processing algorithms to protect critical infrastructures more efficiently.

With the increased concern over terrorist threats at critical infrastructure sites, installing and operating comprehensive detection, management, and control systems has become imperative. This can be achieved by selecting appropriate components from the most powerful surveillance technologies available and combining them into highly effective security systems. Thermal imaging, automated software detection, immersive visual assessment, and wide-area command and control are presently considered the four building blocks of a completely integrated security system.

It is now recognized that many US energy plants, commerce hubs, and other key sites, mostly monitored by closed-circuit television (CCTV), currently need to improve their detection and monitoring capabilities. When compared to modern methods, most conventional CCTV systems have major shortcomings, including limited all-weather situational awareness, impaired nighttime detection, and a lack of early-detection functionality. They are also subject to operator fatigue and other human errors.1 Integrating a system using the four building blocks offers the means to significantly reduce these shortcomings.

The popularity of thermal imaging cameras at critical infrastructure sites is largely due to their good performance at night and in challenging weather, their ability to see through foliage, and their covert surveillance and long-range detection capabilities. In remote locations, the lack of adequate lighting is always a concern and can leave security gaps in shadowy corners, dense foliage, or other dark areas. Visible CCTV cameras and short-wave infrared cameras have difficulty detecting intruders in dark areas because of their dependence on a visible light source. In contrast, thermal imaging detects radiation in the infrared range of the electromagnetic spectrum. Since all objects emit infrared radiation according to their temperature, thermal imaging cameras can pick out warm objects, which stand out well against cooler backgrounds; humans and other warm-blooded animals are easily detected against the environment, day or night. In addition, the thermal wavelengths used by such cameras make it easier to detect threatening activity in inclement weather. Even at long distances, these cameras remain the most effective choice for surveying large areas.

The performance of a critical infrastructure security surveillance system can also be improved by combining thermal imaging cameras, working either as stand-alone monitors or as part of a network, with advanced image-processing algorithms that can improve reliability while increasing the degree of system automation and the level of situational awareness.

Modern surveillance systems should be able to detect events, evaluate the degree of threat, and archive or provide real-time reports to a command center. In addition, a 3D-immersive video surveillance system, with either single, multiple, or pan/tilt/zoom cameras, can further enhance situational awareness for security personnel. This can be achieved by creating a 3D visual context that seamlessly merges “live” video streams from security cameras with a 3D representation of the monitored facility, with further enhancement possible using algorithms that can increase awareness and reliability to higher levels.

While 3D-immersive video surveillance provides situational awareness, a wide-area remote surveillance system integrates sensors of any type over very large areas to ensure effective responses to security threats. The wide-area surveillance can detect and respond to a series of alarms, while one (or more) 3D-immersive system(s) can manage an individual event.

The level of threat to critical infrastructures across the world is high, and these facilities are expected to remain vulnerable in the coming years. The value of a modernized surveillance system that increases security and safety at these sites is accordingly enormous. This is why automated thermal detection and software systems are presently considered a highly valuable addition to traditional security systems for protecting the public and assets to the fullest degree and with the highest confidence.

FLIR's Thermal Imaging Cameras

Thermal Insulation and Fireproofing Materials

Thermal insulation and fireproofing materials reduce the flow of heat through the thickness of the material. They are typically fiber-based or foam structures prepared from thermally stable materials. There are five basic material types: fiberglass, glass wool, polymeric materials, cellulose fibers, and ceramics or refractories. Fiberglass is typically offered as batting or as a high-loft, flexible structure; however, board-like products are also suitable for many industrial applications. Glass wool is spun from slag, rock, glasses, or minerals that have been melted and drawn into fibers. Polymeric materials are high-molecular-weight, often hydrocarbon-based materials; they can be prepared as films, fibers, fabrics, or foams. Cellulosic fibers are prepared from wood pulp, cotton, and other natural resources. Refractories are hard, heat-resistant thermal insulation and fireproofing materials such as alumina cement, fire clay, bricks, pre-cast shapes, cement or monolithics, and ceramic kiln furniture. Ceramics and refractories have high melting points and are suitable for applications requiring wear resistance, high-temperature strength, electrical or thermal insulation, or other specialized characteristics.

There are five main forms for thermal insulation and fireproofing materials: bulk chopped fibers, textiles or fibrous mats, foam, board and block insulation, and films or foils. Bulk chopped fibers provide loose, flowable insulation that can be filled or blown into an application, or serve as the basis for a textile or mat. Textiles and fibrous mats are made by weaving, knitting, braiding, or web extrusion. These materials can also be needlepunched, formed from a slurry (wet laid), or produced in other ways. The properties of finished products depend upon the manufacturing process, fiber material, and fiber size. Foam insulation materials are made from low-density elastomers, plastics, and other materials with various porosities. They are used in a variety of architectural, industrial, medical, and consumer applications. Board and block insulation is made from a variety of base materials in the form of a sheet, strip, plate, or slab. Thermal insulation and fireproofing materials are also prepared from films, foils, or composite structures with foam, fabrics, or other materials.

Thermal insulation and fireproofing materials differ in terms of specifications and features. Use temperature and thermal conductivity are two important parameters to consider. Use temperature is the temperature range to which a material can be exposed without degradation of its structural or other required end-use properties. Thermal conductivity is the rate of heat transfer per unit area through a material for a given applied temperature gradient. In terms of features, some thermal insulation and fireproofing materials are flame retardant, electrically insulating, or chemical- and fuel-resistant. Others are UL approved, a designation from Underwriters Laboratories (UL). Hydrophilic (absorbent) and hydrophobic (waterproof) materials are also commonly available.
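The thermal conductivity definition above is just Fourier's law for a flat slab. Here is a brief Python sketch; the fiberglass conductivity of 0.04 W/m·K is an assumed textbook-style value for illustration, not vendor data:

```python
def heat_flux(conductivity_w_mk, thickness_m, delta_t_k):
    """Steady-state heat flux (W/m^2) through a flat slab:
    Fourier's law, q = k * dT / thickness."""
    return conductivity_w_mk * delta_t_k / thickness_m

# Illustrative: 100 mm of fiberglass batting (k ~ 0.04 W/m.K, an assumed
# value) with a 20 K temperature difference across it.
flux = heat_flux(0.04, 0.100, 20.0)  # about 8 W per square metre
```

Doubling the thickness halves the flux, which is the basic trade-off behind choosing insulation thickness for a target heat loss.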