Image sensors are solid-state devices and one of the most important components inside a machine vision camera. Every year new varieties of sensors are manufactured with improvements to sensor size, resolution, speed, and light sensitivity.
In this article we discuss some of the basics of image sensor technology found inside machine vision cameras and how those basics relate to sensor classifications: image sensor components, silicon wafers, sensor functions inside a camera, mono and color sensors, image sensor format size, pixel size, and spectral response.
Below is a typical CMOS image sensor. The sensor chip is held in a package with a protective glass window, and the package has contact pads that connect the sensor to the PCB. Different sensors come in different packages; the sensor pictured, for example, uses a ceramic PGA package. The solid-state image sensor chip contains pixels, which are made up of light-sensitive elements, microlenses, and micro-electrical components.
The chips are manufactured by semiconductor companies and cut from wafers. Wire bonds carry the signal from the die to the contact pads at the back of the sensor. The packaging protects the sensor chip and wire bonds from physical and environmental harm, provides thermal dissipation, and includes the interconnecting electronics for signal transfer. A transparent window at the front of the package, called the cover glass, protects the sensor chip and wires while allowing light to reach the light-sensitive area.
Sensor dies are produced in large batches on silicon wafers. The wafers are cut into many pieces, with each piece housing a single sensor die. The larger the sensor die, the fewer sensors fit on each wafer, which typically leads to higher costs; the sketch below gives the classic back-of-the-envelope estimate.
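As a rough illustration, a common first-order approximation divides the wafer area by the die area and subtracts an edge-loss term for the partial dies around the wafer's rim. The 300 mm wafer and the die dimensions below are illustrative assumptions, not figures from this article.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_w_mm: float, die_h_mm: float) -> int:
    """First-order estimate of whole dies per round wafer: gross wafer area
    over die area, minus an edge-loss correction for partial dies at the rim."""
    die_area = die_w_mm * die_h_mm
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

# A 300 mm wafer yields far fewer large dies than small ones, hence the cost gap.
print(dies_per_wafer(300, 36.0, 24.0))  # large, full-frame-sized die: ~59
print(dies_per_wafer(300, 6.0, 4.5))    # small-format die: ~2490
```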
CMOS stands for complementary metal-oxide-semiconductor: a silicon-based technology that realizes its basic functions through complementary pairs of negatively and positively doped (n-type and p-type) transistors.
The current generated by these two complementary device types can be recorded and interpreted as an image by the processing chip. A CMOS image sensor is usually composed of an image sensor cell (pixel) array, row and column drivers, timing control logic, an A/D converter, a data bus output interface, and a control interface.
These parts are usually integrated on the same silicon chip. The working process can be divided into reset, photoelectric conversion, integration, and readout; a toy model of this cycle is sketched below.
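To make those four phases concrete, here is a minimal sketch of one pixel's working cycle. The reset level, quantum efficiency, and photon count are hypothetical numbers chosen purely for illustration.

```python
class Pixel:
    """Toy model of one pixel's cycle: reset -> photoelectric
    conversion / integration -> readout."""

    RESET_LEVEL = 10_000  # hypothetical reset charge, in electrons

    def __init__(self) -> None:
        self.charge = 0

    def reset(self) -> None:
        # Reset: charge the photodiode node to a reference potential.
        self.charge = self.RESET_LEVEL

    def integrate(self, photons: int, quantum_efficiency: float = 0.5) -> None:
        # Photoelectric conversion + integration: photogenerated electrons
        # discharge the node during the exposure.
        self.charge = max(0, self.charge - int(photons * quantum_efficiency))

    def readout(self) -> int:
        # Readout: the charge lost since reset encodes the light intensity.
        return self.RESET_LEVEL - self.charge

pixel = Pixel()
pixel.reset()
pixel.integrate(photons=4_000)
print(pixel.readout())  # 2000: brighter light -> larger signal
```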
(Composition of a CMOS sensor.) The difference between the two sensor types lies in how information is transferred after photoelectric conversion. A MOS transistor and a photodiode together form the structure equivalent to one pixel. (MOS transistor pixel structure.) When the integration period ends, a scan pulse applied to the gate turns the MOS transistor on; the photodiode is reset to the reference potential, and a video current flows through the load.
The PN junction at the MOS transistor's source performs both photoelectric conversion and carrier storage; when a pulse signal is applied to the gate, the video signal is read out. This is how a CMOS image sensor begins to sense light.
The CMOS image sensor's element array is composed of a horizontal shift register, a vertical shift register, and an array of CMOS light-sensitive elements. (CMOS image-sensitive element array structure.) As mentioned above, each MOS transistor functions as a switch, driven by pulses from the horizontal and vertical scanning circuits.
The horizontal shift register sequentially turns on the MOS transistors that perform horizontal scanning from left to right (that is, it addresses the columns), while the vertical shift register sequentially addresses each row of the array. Each pixel consists of a photodiode and a MOS transistor that acts as a vertical switch. The horizontal switches are turned on in sequence by pulses from the horizontal shift register,
and the vertical switches by pulses from the vertical shift register, so the reference bias voltage can be applied to each pixel's photodiode in turn. (CMOS image sensor array working diagram.) While illuminated, the diode generates carriers that discharge the junction capacitance; this is the signal accumulation process during integration. Re-applying the bias voltage is also the signal-reading process: the size of the video signal formed on the load is proportional to the intensity of the light on the pixel, as the sketch below illustrates.
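Here is a minimal emulation of that scanning order; the charge values are arbitrary stand-ins for whatever the real pixels accumulated.

```python
def scan_out(pixel_charges):
    """Emulate shift-register readout: the vertical register selects one
    row at a time, then the horizontal register clocks each column's pixel
    onto the output line, producing a serial video signal."""
    for row in pixel_charges:    # vertical shift register: select a row
        for charge in row:       # horizontal shift register: columns, left to right
            yield charge         # video signal, proportional to light on the pixel

frame = [[10, 20, 30],
         [40, 50, 60]]
print(list(scan_out(frame)))  # [10, 20, 30, 40, 50, 60]
```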
According to the functional block diagram of the CMOS image sensor, its workflow can be divided into the following three steps. (Functional block diagram of CMOS image sensor.)
Step 1: External light illuminates the pixel array and the photoelectric effect generates a corresponding charge in each pixel unit. The imaging lens focuses the scene onto the image sensor array, a two-dimensional pixel array in which each pixel includes a photodiode. The photodiode in each pixel converts the light intensity at its position on the array surface into an electrical signal; a rough array-level sketch of this conversion follows.
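The sketch below models this conversion for the whole array at once. The quantum efficiency, full-well capacity, and photon fluxes are illustrative assumptions; Poisson shot noise is included because photon arrival is inherently random, a detail that goes beyond this article's description.

```python
import numpy as np

rng = np.random.default_rng(0)

def expose(photon_flux: np.ndarray, exposure_s: float,
           quantum_efficiency: float = 0.6,
           full_well_e: int = 10_000) -> np.ndarray:
    """Toy photoelectric conversion for a 2-D pixel array: photons arriving
    during the exposure become electrons with some quantum efficiency,
    subject to Poisson (shot) noise, clipped at the full-well capacity."""
    mean_electrons = photon_flux * exposure_s * quantum_efficiency
    electrons = rng.poisson(mean_electrons)
    return np.clip(electrons, 0, full_well_e)

# Photon flux (photons/s) the lens focuses onto a tiny 2x2 array:
flux = np.array([[1e5, 2e5],
                 [5e4, 8e5]])
print(expose(flux, exposure_s=0.01))
```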
Step 2: The row selection circuit and column selection circuit select the pixel to operate on, and the electrical signal on that pixel is read out. During selection, the row selection logic unit can scan the pixel array row by row or interlaced, and the same applies to the columns. Used together, the row and column selection logic units realize the window (region-of-interest) extraction function of the image, as in the sketch below.
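A minimal sketch of that windowing, assuming the frame is already available as a NumPy array; the row_step parameter is a stand-in for interlaced row scanning.

```python
import numpy as np

def read_window(pixel_array: np.ndarray,
                row_start: int, row_count: int,
                col_start: int, col_count: int,
                row_step: int = 1) -> np.ndarray:
    """Mimic the row/column selection logic: only the addressed rows and
    columns are read out. row_step=2 approximates interlaced scanning;
    together the two selectors extract a window (ROI) from the image."""
    rows = slice(row_start, row_start + row_count, row_step)
    cols = slice(col_start, col_start + col_count)
    return pixel_array[rows, cols]

frame = np.arange(36).reshape(6, 6)
print(read_window(frame, row_start=1, row_count=4, col_start=2, col_count=3))
```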
Step 3: After signal processing, output the corresponding pixel unit. The main function of the analog signal processing unit is to amplify the signal and improve the signal-to-noise ratio. The amplified pixel signal is sent to the correlated double sampling (CDS) circuit for processing.
Correlated double sampling is an important method used by high-quality devices to eliminate certain kinds of interference. The basic principle is that the image sensor provides two outputs, a real-time signal and a reference signal, and the difference between the two removes interference that is the same in, or correlated between, both. The CDS stage can also perform signal integration, amplification, sampling, and hold; the core subtraction is sketched below.
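A minimal sketch of the subtraction at the heart of CDS, assuming both samples are available as arrays; the offset values are made up to show a fixed pattern cancelling out.

```python
import numpy as np

def correlated_double_sampling(reset_sample: np.ndarray,
                               signal_sample: np.ndarray) -> np.ndarray:
    """Subtract the reference (reset) sample from the real-time signal
    sample; offsets and noise common to both samples cancel out."""
    return signal_sample.astype(np.int32) - reset_sample.astype(np.int32)

# A per-pixel offset (fixed-pattern interference) present in both samples:
offset = np.array([[5, 40], [12, 7]])
reset  = 100 + offset                                    # reference output
signal = 100 + offset + np.array([[30, 30], [60, 60]])   # reference + photo signal
print(correlated_double_sampling(reset, signal))          # [[30 30] [60 60]]
```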
Beyond that, a practical camera chip must contain various control circuits, such as exposure time control and automatic gain control. To make each part of the circuit operate at the prescribed rhythm, multiple timing control signals are required, and to ease integration the chip must also output timing signals such as synchronization, line-start, and field-start signals. Among analog cameras and standard-definition network cameras, CCDs were the most widely used sensors and dominated the market for a long time. CCDs are characterized by high sensitivity but a low readout speed, which makes them unsuitable for the high-resolution progressive scanning used by high-definition surveillance cameras.
The buckets in the top image cannot measure the colour of the light, only its intensity. Placing a different primary-colour filter over each bucket means only light of that colour is captured. As a result, each line of pixels carries only two of the three primary colours: either red and green, or blue and green, as the layout sketch below shows.
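A tiny sketch of that layout, building the Bayer arrangement as a character grid. The RGGB ordering used here is the common convention, though other variants exist.

```python
import numpy as np

def bayer_mask(height: int, width: int) -> np.ndarray:
    """Build an RGGB Bayer layout: even rows alternate R,G and odd rows
    alternate G,B, so each line carries only two of the three primaries."""
    mask = np.empty((height, width), dtype="<U1")
    mask[0::2, 0::2] = "R"
    mask[0::2, 1::2] = "G"
    mask[1::2, 0::2] = "G"
    mask[1::2, 1::2] = "B"
    return mask

print(bayer_mask(4, 4))
```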
There is space between the light-sensitive buckets on the sensor; this is where some of the on-chip electronics are located. Any light falling into this gap would be wasted, as it could not be recorded. A microlens, a tiny lens sitting above the Bayer filter on each pixel, aims to eliminate this waste by directing light that would fall between two pixels into one or other of them, so each pixel captures as much light as possible.
Full colour. If you've read everything so far carefully, and had a good look at the picture of a Bayer-pattern filter, you may have noticed that there are twice as many green squares as red or blue. This is because the human eye is much more sensitive to green light than to either red or blue, and has much greater resolving power in that range. You may also have wondered how a full-colour image is created if each pixel can record only a single colour of light.
Surely each pixel is missing two thirds of the colour data needed to make a full-colour image? Indeed it is, but thanks to some very clever in-camera algorithms, the full colour of each pixel can be worked out. The method is called 'demosaicing', and real implementations are very complex. In simple terms, though, the camera treats each 2x2 set of pixels as a single unit. This provides one red, one blue, and two green samples, and the camera can then estimate the actual colour based on the photon levels in each of these four wells; a toy version of this idea follows.
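A toy version of that 2x2 grouping, assuming an RGGB layout like the mask built earlier. Production demosaicing interpolates a full-resolution colour for every pixel rather than collapsing quads, so treat this only as an illustration of the principle.

```python
import numpy as np

def demosaic_2x2(raw: np.ndarray) -> np.ndarray:
    """Naive demosaic: treat each RGGB 2x2 quad as one output pixel,
    taking its R and B samples directly and averaging its two G samples.
    This halves the resolution but shows the idea."""
    r = raw[0::2, 0::2].astype(np.float32)
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]).astype(np.float32) / 2.0
    b = raw[1::2, 1::2].astype(np.float32)
    return np.stack([r, g, b], axis=-1)

# 4x4 raw Bayer frame -> 2x2 full-colour image:
raw = np.arange(16).reshape(4, 4)
print(demosaic_2x2(raw))
```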