1. Introduction

Spectral imaging can be used to collect spectral information from a sample, and different imaging systems and configurations are available for a variety of applications and fields, including remote sensing,1,2 quality assessment of agrofood products,3,4 and biomedical and forensic imaging.5–9 Normal, multispectral, hyperspectral, and ultraspectral imaging are some of the common terms used in spectral imaging; in that order, they are distinguished mainly by an increasing number of wavelength bands and higher precision. Two classification criteria are presented in Table 1.1,10 Both criteria use the number of wavelength bands as one of the defining parameters. However, Fresse et al.10 used precision, whereas Puschell1 used resolution. The number of wavelength bands, being common to both definitions, makes it easier to classify a system as a hyperspectral (HS) or ultraspectral imager. The first definition, which is stricter, is used to define the type of spectral imaging employed in the pushbroom imager presented in this paper.

Table 1. Classification of spectral imaging.
Hyperspectral imaging (HSI) and ultraspectral imaging give more detailed spectral signatures for identification purposes with higher accuracy by enabling the detection of more wavelength bands along with an increase in spectral range, precision, and resolution. Like other conventional spectral imaging techniques, HSI requires a component such as a spectrograph to spectrally split or filter the incoming light from the sample; thereafter, the light needs to be captured by a sensor. There are four main types of imagers, namely spatial-scanning, spectral-scanning, spatiospectral-scanning, and snapshot imagers. Each imager has its own advantages and constraints and can be used for specific objectives or applications. All of them can be used to obtain the same type of information, known as a datacube. A datacube stores intensity values in three dimensions (spatial-spatial-spectral). The value in each voxel [similar to a pixel in a two-dimensional (2-D) dataset] of a datacube indicates the intensity of a specific wavelength from one spatial point in a 2-D sample.11 A spatial-scanning imager usually uses a dispersive element, such as a prism-grating-prism assembly in a spectrograph,12 to separate the incoming spectrum so that information from the constituent wavelength bands can be detected by the detector array. The conventional point-scanning spectrometer records the spectrum of a spatial point in each scan to give one-dimensional (1-D) spectral information. By repeating the scan across multiple points in a 2-D area (whiskbroom imager), a datacube can be formed. Spatial scanning can be done using a 2-D stage to move the sample or using a microelectromechanical system scanner to direct the point illumination to different parts of the sample.13 With a large sample, the data acquisition time can be long, as scanning needs to be repeated for each point in the sample.
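The datacube structure described above can be illustrated with a minimal sketch (assuming NumPy; the dimensions below are arbitrary placeholders, with 756 matching the band count reported later in this paper):

```python
import numpy as np

# Hypothetical datacube: 100 x 120 spatial points, 756 wavelength bands.
cube = np.zeros((100, 120, 756))

# One voxel: intensity of band index 300 at spatial point (row 10, col 20).
voxel = cube[10, 20, 300]

# The full spectrum of one spatial point is a 1-D slice along the
# spectral axis, as recorded by a point-scanning (whiskbroom) imager.
spectrum = cube[10, 20, :]          # shape (756,)

# A single-wavelength image is a 2-D spatial slice.
band_image = cube[:, :, 300]        # shape (100, 120)
```

Each scanning strategy simply fills this same array along different slices: a whiskbroom fills one spectrum at a time, a pushbroom fills one column-by-wavelength plane per scan, and a spectral-scanning imager fills one band image per scan.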
A faster alternative in this category is to use a line-scanning or pushbroom imager.14,15 In each scan, a line-scanning imager captures the individual spectrum of every point across a line of the sample. The light from this line of the sample is dispersed into different wavelengths onto the 2-D detector. Thus, the image on the detector has one spatial and one spectral dimension. The scan is repeated by relative motion perpendicular to the detector's line of view (LOV) of the sample. Compared with 2-D point scanning, line scanning repeats in only one dimension, moving to a subsequent row each time. This reduces the acquisition time by a factor equal to the number of points along each line, which can be significant. The development of acousto-optic tunable filters16,17 and liquid crystal tunable filters18,19 makes a spectral-scanning imager possible. These filters can be electronically tuned to selectively choose a wavelength band to be transmitted and imaged by the detector. Each scan provides intensity data of the sample (2-D spatial-spatial) for one wavelength band and is repeated by tuning the filter so that the next wavelength band (1-D spectral) can be recorded. In a spatiospectral-scanning imager, each scan provides intensity data of a diagonal slice of the datacube. This differs from spatial- and spectral-scanning imagers, which scan along orthogonal directions of the datacube. Relative motion between the detector and the sample is required to give a sequence of diagonal slices to build the datacube.20 A snapshot imager acquires all the information required to build a datacube in one scan; thus, no temporal scanning or relative motion between the sample and imager is required.21 The main advantage of a pushbroom imager over a spectral-scanning or snapshot imager is that it offers a much higher spectral resolution across a broad spectral range.
However, its main drawback is the need for relative motion between the sample and the detector, which can limit its data acquisition rate. A few spatial-scanning HS imagers have been reported in the literature for biomedical-related applications. Some of these imagers do not use a video camera (VC) in the system,14,22 whereas others incorporate a VC in the setup for direct video imaging,23,24 which has many benefits. Using a VC for direct video imaging gives a better visual representation with a color image, which can be used to verify the data after measurement. The VC and detector camera (DC) can be positioned such that both cameras capture a focused image simultaneously. Using the VC, samples of different thicknesses can be easily positioned to maintain the same working distance. The VC also allows the sample to be positioned precisely, which is especially important for a system with a small field-of-view (FOV). Unwanted and repeated scanning can be prevented, which saves time and minimizes deterioration of the sample. However, direct video imaging capability alone does not allow the user to pinpoint exactly which area in the FOV of the VC is the region of interest (ROI). In this context, this paper reports the instrumentation, calibration, and theoretical framework used to set up a pushbroom HS imager incorporating a VC for both direct video imaging and a user-selectable ROI. The advantages of such a configuration include the benefits of direct video imaging mentioned earlier. The user-selectable ROI function allows the storing of information from only within the ROI, minimizing measurement time, data size, and computational time. This precise mining of information from only within the ROI is accomplished by mechanical and digital means.
While the top-to-bottom scanning of the ROI (height) is done by an automated scanning stage, the mining of data from only the spectral range of interest and within the width of the ROI is done by digital means.

2. Instrumentation of the Pushbroom HS Imager

The proposed pushbroom HS imager's design and configuration are shown in Fig. 1. The imager consists of a three-axis motorized stage (Physik Instrumente; two axes: M-112.2DG, third axis: M-110.1DG) to position the sample. One stage axis is used to move the sample between each scan. Light from the sample then passes through the forelens (Navitar 2-50145 doublet lens), which is placed in a fine focus adapter (Navitar 2-16265). This adapter is attached to the bottom side of the quadrocular adapter (Nikon Y-QT), which houses a sliding mirror. The sliding mirror is initially pushed into the quadrocular adapter and directs light toward the VC (Path 1 in Fig. 1) before scanning commences. The VC (iDS UI-1550LE-C-HQ), with its array of light sensitive pixels, allows direct video imaging of the sample. The software developed allows the user to choose a particular region within this FOV as the ROI. After selection of the ROI, the sliding mirror is pulled out of the quadrocular adapter and light travels straight toward the spectrograph and the DC (Path 2 in Fig. 1). Scanning can begin only after the sliding mirror is pulled out. The spectrograph (Specim ImSpector V10E, 400 to 1000 nm, 2.8-nm spectral resolution) is used for the dispersion of light, and the DC (Andor EMCCD LucaEM DL-604M-OEM, 400 to 1000 nm), with its light sensitive pixels, is used to record the spectral information.

3. Calibration of the Pushbroom HS Imager

The calibration can be divided into three main parts (FOV, spectral, and position).

3.1. FOV Calibration

The length of the FOV of the VC in the vertical direction (in mm) is measured at the minimum and maximum zooms (adjusted using the fine focus adapter) to be 5.17 and 4.32 mm, respectively.
This is done by first placing a sample onto the stage. The stage is then displaced to move the sample's reference point from the top to the bottom of the FOV of the VC; this stage displacement equals the FOV length. The results presented in the following sections of this paper are all at maximum zoom, where the FOV length was 4.32 mm.

3.2. Spectral Calibration

The spectrum from each sample point along the DC's LOV is dispersed by the spectrograph. Each spectrum spreads along the row direction of the DC. This calibration assigns each DC row index to a specific wavelength band. Calibration is carried out by imaging a flat sample illuminated by 12 calibration wavelengths (470 nm, and 500 to 1000 nm in 50-nm incremental steps) from a tunable laser source (NKT Photonics SuperK Extreme EXR-15, SuperK Select 4xVIS/IR, SuperK Select-/nIR1). The second-order polynomial model used to relate each DC row index $r$ to its calibration wavelength is shown in Eq. (1),

$$\lambda(r) = a r^2 + b r + c, \tag{1}$$

where $a$, $b$, and $c$ are constants. Subsequently, a second-order polynomial regression is used to determine the values of these constants. With these constants, each DC row index will later be assigned a wavelength.

3.3. Position Calibration

The VC and DC have different views of the sample. The VC has a rectangular view of the sample, whereas the DC has an LOV across the sample. The length of the DC's LOV is also longer than the width of the VC's FOV (Fig. 2). Thus, position calibration between these two cameras is required.

3.3.1. Extreme left and right DC column indices

This calibration is done because the width across the sample viewed by the VC is shorter than that viewed by the DC. The two calibration values are the DC column indices corresponding to the extreme left and right views of the VC, respectively. The sample used is a United States Air Force (USAF) chart, placed such that the left edge of a dark square is along the extreme left view of the VC. By looking at the DC image, the dark square is easily identified, and the DC column index corresponding to the left side of the dark portion is read off. This process is shown in Fig. 3.
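The spectral calibration of Sec. 3.2 can be sketched as follows. Only the procedure (a second-order polynomial fit of wavelength against DC row index) is taken from the text; the row indices below are hypothetical placeholders for values that would be read off the DC image of each laser line.

```python
import numpy as np

# The 12 calibration wavelengths used in the paper (nm):
# 470 nm, then 500 to 1000 nm in 50-nm steps.
calib_wavelengths = np.array([470] + list(range(500, 1001, 50)), dtype=float)

# Hypothetical DC row indices at which each calibration line is observed.
calib_rows = np.array([88, 126, 189, 252, 315, 378, 441,
                       504, 567, 630, 693, 756], dtype=float)

# Second-order polynomial fit: wavelength = a*row**2 + b*row + c,
# as in Eq. (1); polyfit returns coefficients highest degree first.
a, b, c = np.polyfit(calib_rows, calib_wavelengths, deg=2)

def row_to_wavelength(row):
    """Assign a wavelength (nm) to a DC row index via the fitted model."""
    return a * row**2 + b * row + c
```

With real data, the residuals of this fit give a quick check on the calibration quality before the constants are stored for later use.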
The left index is found to be 224, which means that the leftmost view of the VC is imaged onto the 224th DC column. The right index is obtained using a similar procedure and is found to be 777.

3.3.2. VC row index of the DC line of view

This calibration is done to determine the VC row index that shares the same view as the DC's LOV. It can be found by first looking at the DC view and then slowly changing the sample's position until a change in the DC view is observed. This happens when the sample enters the DC's LOV. The index is found to be 542: the DC's LOV across the sample corresponds to the 542nd row from the top of the VC's light sensitive pixel array.

4. User-Defined Parameters

These parameters give the user flexibility in using the system so that it can be faster and give only the required data for later analysis.

4.1. Region of Interest

The user-selectable ROI determines the sample region within the VC's FOV from which the data are collected and stored. Selection is done by simply dragging a rectangular section across the VC's FOV. The ROI is described by four parameters: "top," "bottom," "left," and "right." "Top" and "bottom" refer to the VC row indices that correspond to the top and bottom of the ROI, respectively. "Left" and "right" refer to the VC column indices that correspond to the extreme left and right views of the ROI, respectively. A shorter ROI (vertical direction) results in fewer scans, thus reducing data acquisition time and data size. A narrower ROI (horizontal direction) will not reduce the data acquisition time but does reduce data size.

4.2. Spectral Range

The spectral range of both the DC and spectrograph is 400 to 1000 nm; therefore, the maximum spectral range of the integrated system is also the same. The user-selected spectral range is defined by its lower and upper wavelength limits (in nm), which depend on the illumination source and the spectral range of interest. Spectral information beyond this range will not be recorded. A smaller spectral range results in a smaller data size but does not affect the acquisition time.
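The scaling relations stated above — the ROI height and stage step set the number of scans (and hence acquisition time), while the ROI width and spectral range set only the data size — can be sketched in a minimal form. The function name and pixel counts below are hypothetical illustrations, not part of the paper's software.

```python
def acquisition_summary(roi_top, roi_bottom, roi_left, roi_right,
                        n_bands, step):
    """Estimate scan count and stored-value count for a given ROI.

    Indices are VC pixel rows/columns; n_bands is the number of
    wavelength bands kept; step is the stage step in VC pixel rows.
    """
    # Number of line scans: one per `step` rows of the ROI height.
    n_scans = (roi_bottom - roi_top) // step + 1
    # Stored values per scan: ROI width (used here as a proxy for the
    # corresponding DC columns) times the number of spectral bands.
    values_per_scan = (roi_right - roi_left + 1) * n_bands
    return n_scans, n_scans * values_per_scan

# Hypothetical 100-row, 50-column ROI with all 756 bands and Step = 5:
n_scans, total_values = acquisition_summary(100, 199, 0, 49, 756, 5)
```

Halving the ROI height or doubling the step halves `n_scans` (and the scan time), whereas narrowing the ROI or spectral range shrinks only `total_values`.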
4.3. Stage Step Size

The pushbroom HS imager sequentially scans the ROI from top to bottom. The distance that the scanning stage moves between subsequent scans is defined by "Step." For example, when Step is set to 5, the stage will move by a distance covered by five rows of VC pixels. A bigger Step results in a shorter acquisition time but can give a poorer spatial resolution along the scanning direction. Thus, Step has to be adjusted to find a balance between data acquisition time and spatial resolution along this axis.

4.4. DC Setting

The exposure time and electron-multiplying (EM) gain of the DC can be adjusted depending on the illumination condition. A high EM gain is used in low-intensity illumination conditions to increase DC sensitivity. An EM gain value that is too high can, however, lead to DC pixel saturation. Both the EM gain and exposure time have to be optimized to reduce exposure time while still getting high-quality DC images. This reduces the overall data acquisition time.

5. Return Values/Vectors

All the steps and procedures mentioned in Secs. 3 and 4 are used to produce four return values and two vectors (described in detail in this section). They communicate with the DC and the scanning stage to collect data only as specified by the user-defined parameters.

5.1. DC Column Limits of the ROI

These two return values are the DC column indices that correspond to the left and right of the ROI, respectively. Each scan records data from the DC between these two columns only. The VC and DC column indices are akin to different scales referring to the same object (Fig. 4). Linear interpolation is used to determine the left and right DC column limits in Eqs. (2) and (3), respectively, using the calibration indices from Sec. 3.3.1. The DC does not accept an end column directly; it requires the starting column index and the column length, which is calculated using Eq. (4), where rd denotes rounding off to the nearest integer.

5.2. WL Vector

WL is the wavelength assigned to each DC row index and is calculated using Eq. (5). The polynomial constants obtained in Sec. 3.2 for spectral calibration are used here.
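The column interpolation of Sec. 5.1 and the wavelength assignment of Sec. 5.2 (together with the polynomial inversion used later for the spectral-range limits) can be sketched as follows. The calibration indices 224 and 777 come from Sec. 3.3.1; the VC width and the polynomial constants are hypothetical placeholders.

```python
import math

# Position-calibration results from Sec. 3.3.1: DC column indices that
# correspond to the extreme left and right views of the VC.
DC_LEFT_CAL, DC_RIGHT_CAL = 224, 777

VC_WIDTH = 1600  # hypothetical VC width in pixels

def roi_to_dc_columns(vc_left, vc_right):
    """Map ROI columns on the VC scale to a DC start column and length,
    by linear interpolation and rounding, in the spirit of Eqs. (2)-(4)."""
    scale = (DC_RIGHT_CAL - DC_LEFT_CAL) / (VC_WIDTH - 1)
    dc_left = round(DC_LEFT_CAL + vc_left * scale)
    dc_right = round(DC_LEFT_CAL + vc_right * scale)
    return dc_left, dc_right - dc_left + 1

# Hypothetical spectral-calibration constants (Sec. 3.2): the wavelength
# assigned to DC row r is A*r**2 + B*r + C.
A, B, C = 1.0e-6, 0.793, 400.0

def wl_vector(n_rows):
    """Wavelength (nm) assigned to each DC row index, as in Eq. (5)."""
    return [A * r**2 + B * r + C for r in range(n_rows)]

def wavelength_to_row(wl):
    """Invert the calibration polynomial for a target wavelength via the
    real, positive root of A*r**2 + B*r + (C - wl) = 0, mirroring the
    quadratic solution of Eqs. (6)-(8)."""
    disc = B**2 - 4 * A * (C - wl)
    return (-B + math.sqrt(disc)) / (2 * A)

# Example: an ROI spanning the full VC width maps back onto the
# calibrated DC column range.
dc_start, dc_len = roi_to_dc_columns(0, VC_WIDTH - 1)
wl = wl_vector(757)
```

A round trip through `wl_vector` and `wavelength_to_row` recovers the original row index, which is a convenient self-check on the constants.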
5.3. DC Row Limits of the Spectral Range

These two return values are the DC row indices that correspond to the lower and upper limits of the selected spectral range, respectively. In each scan, only data between these two row indices will be recorded. The polynomial constants from the spectral calibration in Sec. 3.2 are used here. The two row limits can be determined from the real solution of a quadratic equation. Equation (6) is formed by rearranging Eq. (1) for the row index. The real solution to Eq. (6) is used to calculate the lower row limit using Eq. (7); similarly, the upper row limit is calculated using Eq. (8). The DC does not accept an end row directly; it requires the starting row index and the row length, which is calculated using Eq. (9). Because the spectral range of both the spectrograph and DC is 400 to 1000 nm, the maximum spectral range of this system is also 400 to 1000 nm. The maximum row length is calculated to be 756, which means that the pushbroom HSI system detects 756 wavelength bands across the 400 to 1000 nm spectral range. Using the chosen definition of spectral imaging from Fresse et al.,10 this system is classified as an HS imager. The average spectral distance between adjacent bands is about 0.795 nm, but the spectrograph has a spectral resolution of 2.8 nm. Therefore, the system's overall spectral resolution is 2.8 nm.

5.4. Stage Position Vector

This vector holds the positions that the scanning stage needs to take throughout the entire scan so that only the ROI is scanned, from its top to its bottom, at the stage step specified by the user. The positions are counted from the home position of the stage. The FOV length from Sec. 3.1 and the VC row index from Sec. 3.3.2 are needed. The relationship between the count and displacement of the stage (CD) was determined from the stage's specifications. Prior to the first scan, the stage shifts the sample until the top of the ROI is in line with the DC LOV. This displacement in millimeters is calculated using "top" and the calibration values, and is later converted to a displacement in counts of the stage using CD.
By adding this to the current stage position in counts, the stage position in counts for the first scan can be calculated using Eq. (10). Similarly, the position for the final scan can be calculated from Eq. (11). The stage is closer to its home position during the first scan than during the last scan; thus, the first-scan position count is smaller than the final-scan one. The stage step in counts is calculated from the user-defined Step, the FOV calibration, and CD using Eq. (12).

5.5. Significance of Return Values

The two column limits are related to the location of the ROI in the horizontal direction, and the two row limits to the user-defined spectral range. These four values together define a corresponding region on the DC pixel array from which data are recorded in each scan. Each scan produces an array of data in the spatial-spectral domain, and the stage then moves on to the next position. This process is repeated until scanning has taken place at all the positions indicated by the stage position vector.

6. HyperSpec

A LabVIEW®-based software package, called HyperSpec, has been developed in-house; its control panel is shown in Fig. 5. It is used for the software interfacing of the VC, DC, and three-axis stage and incorporates all the points discussed in Secs. 3, 4, and 5. After calibration and entry of the user-defined parameters, scanning can begin. The return values are determined automatically, and the repeated cycle of stage movement followed by DC data recording also runs on its own. After all the scanning has been completed, the stage returns the sample to the position it was at just before scanning started.

7. Data Rearrangement and Representation

The files saved are imported and processed by an in-house written MATLAB® script. The script arranges the 2-D data into a single three-dimensional (3-D) datacube. As data representation is flexible and can vary depending on the needs, many parameters can be altered and customized.
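Two of the steps above — generating the stage-position vector (Sec. 5.4) and rearranging the saved 2-D scans into a datacube (Sec. 7) — can be sketched as follows. All counts and array sizes are hypothetical, and NumPy stands in for the LabVIEW/MATLAB tooling actually used.

```python
import numpy as np

def stage_positions(p_first, p_final, step_counts):
    """Stage position in counts for every scan, first to last, in the
    spirit of Eqs. (10)-(12): a fixed step between the first-scan and
    final-scan positions."""
    return np.arange(p_first, p_final + 1, step_counts)

def assemble_datacube(scans):
    """Stack the per-scan 2-D (column x wavelength) arrays into a single
    3-D datacube (scan x column x wavelength)."""
    return np.stack(scans, axis=0)

# Example: 20 stage positions in counts, and 20 matching scans of
# 554 DC columns x 756 wavelength bands.
positions = stage_positions(1000, 2900, 100)
scans = [np.zeros((554, 756)) for _ in positions]
cube = assemble_datacube(scans)   # shape (20, 554, 756)
```

One axis of the resulting cube is the scan (vertical spatial) direction, so the cube can be sliced directly into single-wavelength images or per-point spectra for the plots described in Sec. 7.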
Many types of plots can be made available, such as spectral plots, images at different wavelength bands, and a datacube.

8. Measurement and Results

The measurements in this section were taken at maximum zoom, where the working distance is about 21.5 cm and the full FOV of the VC is as given in Sec. 3.1.

8.1. VC for Selectable ROI

A USAF resolution chart is used in this section, illuminated by a fiber-optic pigtailed source (Edmund Optics MI-150). The full VC FOV before measurement and the selected ROI (Group 3 of the USAF chart), indicated by a black rectangle, can be seen in Fig. 5. This section first shows the different plots that can be produced from each set of data (Figs. 6–8). It then compares the image of the selected ROI with the spatial image captured by the system at a particular wavelength to validate that the system is working correctly and capturing data only from the ROI. Figure 9 is made up of an image of the ROI, with two identical images from the data at 650 nm placed beside and below it. The four dashed lines in this figure match features in the ROI to the same features in the data. It is therefore observed that the system performed scanning only across the selected ROI, and only data in the ROI are saved. This validates the steps and formulas mentioned in Secs. 3 to 7. The longer vertical dotted line also shows that the ROI and the data have the same orientation; therefore, the DC, VC, and scanning stage are all aligned with respect to each other. The VC is thus successfully implemented into the pushbroom HS imager for a user-selectable ROI, minimizing data acquisition time and data size.

8.2. Determining Lateral Resolution Using the USAF Chart

This section uses the same set of data as Sec. 8.1. From Fig. 10, the horizontal and vertical lines of Group 3 Element 5 can still be distinguished.
Thus, the horizontal and vertical lateral resolution of the system at 650 nm in the basic configuration, without any image enhancement, is determined using Group 3 Element 5 of the USAF chart.

8.3. Sample Analysis Demonstration Using Chicken Breast in Reflection Mode

Chicken breast tissue devoid of fat and skin is used, with a visible blood clot on the surface. This part of the chicken breast was chosen so that the blood clot provides contrast in the image. The sample on the glass slide and the ROI are shown in Figs. 11(a) and 11(b), respectively. The same illumination source as in Sec. 8.1 (Edmund Optics MI-150) was used for this study. Figure 12 shows the intensity mapping at four different wavelengths. The regions where 400 spectra were extracted and processed to represent the spectra of the blood clot and the chicken breast tissue are marked by the small white and black rectangles, respectively, in Figs. 11(b) and 12. Figure 13 shows the processed spectra of the chicken breast tissue and the blood clot, which are found to be easily distinguishable from each other. These results indicate that such spectral data can be used as a data library to compare and identify unknown samples in the future.

8.4. Fluorescence Imaging

A Rhodamine 6G fluorescent film, placed on a tissue phantom (Simulab Corporation), and the ROI are shown in Figs. 14(a) and 14(b), respectively. An excitation wavelength of 500 nm (NKT Photonics SuperK Extreme EXR-15, SuperK Select 4xVIS/IR) was used together with a beam expander unit so that the expanded beam covered the entire FOV of the VC. The measurement was taken with an exposure time of 150 ms and an EM gain of 10. The entire spectral range from 400 to 1000 nm was recorded, though Fig. 16 shows only a shorter spectral range for better representation. The intensity mapping at 535, 563, and 585 nm is shown in Fig. 15 to illustrate the differences in fluorescence intensity at varying wavelengths.
The fluorescence spectrum is calculated from the area within the black boxes in Figs. 14(b) and 15. Figure 16 shows the processed excitation and fluorescence spectra, each normalized with respect to itself. The orange solid line shows the fluorescence spectrum, which is an average of 400 spectra from the region indicated by the black rectangle in Figs. 14(b) and 15. The green dotted line is the excitation spectrum, measured separately from a piece of white paper. HSI of fluorescing samples is able to capture multiple fluorescent images at different wavelength bands; in this study, about 250 fluorescent images were captured between 500 and 700 nm (three of them are shown in Fig. 15). Compared with a conventional imaging setup, which uses a fluorescence filter to capture all the emission wavelengths in a single image, HSI provides much more information that can be used for more accurate disease diagnosis. This can prove useful in disease diagnosis of the colon, where the intensity and distribution of endogenous fluorophores are indicators of disease progression.25

9. Conclusion

A pushbroom HS imager that incorporates a VC not only for direct video imaging (benefits mentioned in Sec. 1) but also for a user-selectable ROI within the full imaging FOV of the VC is proposed and demonstrated in this paper. These concepts bring several benefits, especially to a pushbroom HS imager. After selecting the ROI, scanning takes place only within the ROI. There is no unwanted scanning, thus minimizing data acquisition time as well as data size. A smaller data size in turn translates to a shorter computational time in data processing and analysis. Similar schemes can also be applied to spectral-scanning and snapshot imagers; however, they will not shorten the data acquisition time in a spectral-scanning imager (the number of scans depends on the number of spectral bands, not the size of the ROI) or a snapshot imager (only one scan is required).
The use of the VC for a user-selectable ROI presented in this paper helps to offset the pushbroom HS imager's reputation as a relatively slow HS imager. In the current configuration, the VC has a full imaging FOV that is adjustable using the fine focus adapter; the minimum and maximum full imaging FOVs are obtained at working distances of about 21.5 and 23.8 cm, respectively. The full FOV is also the maximum size of the ROI that can be selected by the user. The system has a maximum spectral range covering visible to near-infrared light from 400 to 1000 nm and can detect 756 spectral bands within this range. By using a DC and spectrograph suitable for imaging wavelengths beyond 1000 nm, it is possible to extend the spectral range further into the infrared. The horizontal and vertical lateral resolution of this system at maximum zoom, without the use of any image enhancement, makes the system suitable for biomedical imaging on tissue. In reflection mode imaging, a common and relatively cheap quartz halogen white light source (Edmund Optics MI-150) was used. With respect to the maximum spectral range of interest (400 to 1000 nm), the bulb used in this light source has a poor output from 400 to 500 nm and from 800 to 1000 nm. Within the same spectral regions, the DC also has a lower quantum efficiency. This can be seen in the spectral plots from the reflection mode (Fig. 13), where intensity counts below 450 nm and above 900 nm are always much lower than those at the central wavelengths. Without changing the DC, this issue can be mitigated by using a light source with a higher intensity at the extreme ends of the spectral range of interest. The experiments with the biological and fluorescent phantom samples shown here also demonstrate that this pushbroom HS imager can be used in both reflection and fluorescence imaging modalities.
The lateral resolution can be varied and improved using additional optical elements and optodigital schemes. The use of this system can also be extended to other applications such as cellular-scale biomedical imaging. In addition, by integrating the proposed configuration with a flexible probe scheme, it is expected to find potential endoscopic imaging applications as well.

Acknowledgments

The authors acknowledge the financial support received through COLE-EDB and PhotoniTech-NTU RCA.

References

1. J. J. Puschell, "Hyperspectral imagers for current and future missions," Proc. SPIE 4041, 121–132 (2000). https://doi.org/10.1117/12.390476
2. J. Nieke et al., "Imaging spaceborne and airborne sensor systems in the beginning of the next century," Proc. SPIE 3221, 581–592 (1997). https://doi.org/10.1117/12.298124
3. D. Lorente et al., "Recent advances and applications of hyperspectral imaging for fruit and vegetable quality assessment," Food Bioprocess Technol. 5(4), 1121–1142 (2012). https://doi.org/10.1007/s11947-011-0725-1
4. R. Cubeddu et al., "Nondestructive quantification of chemical and physical properties of fruits by time-resolved reflectance spectroscopy in the wavelength range 650–1000 nm," Appl. Opt. 40(4), 538–543 (2001). https://doi.org/10.1364/AO.40.000538
5. V. K. Shinoj and V. M. Murukeshan, "Hollow-core photonic crystal fiber based multifunctional optical system for trapping, position sensing, and detection of fluorescent particles," Opt. Lett. 37(10), 1607–1609 (2012). https://doi.org/10.1364/OL.37.001607
6. V. M. Murukeshan and N. U. Sujatha, "Integrated simultaneous dual-modality imaging endospeckle fluoroscope system for early colon cancer diagnosis," Opt. Eng. 44(11), 110501 (2005). https://doi.org/10.1117/1.2117487
7. H. Cen and R. Lu, "Optimization of the hyperspectral imaging-based spatially-resolved system for measuring the optical properties of biological materials," Opt. Express 18(16), 17412–17432 (2010). https://doi.org/10.1364/OE.18.017412
8. L. Seah et al., "Fluorescence optimisation and lifetime studies of fingerprints treated with magnetic powders," Forensic Sci. Int. 152(2), 249–257 (2005). https://doi.org/10.1016/j.forsciint.2004.09.121
9. L. Seah et al., "Time-resolved imaging of latent fingerprints with nanosecond resolution," Opt. Laser Technol. 36(5), 371–376 (2004). https://doi.org/10.1016/j.optlastec.2003.10.006
10. V. Fresse, D. Houzet and C. Gravier, "GPU architecture evaluation for multispectral and hyperspectral image analysis," in Proc. IEEE Conf. Design and Architectures for Signal and Image Processing, 121–127 (2010). https://doi.org/10.1109/DASIP.2010.5706255
11. N. Gat, "Imaging spectroscopy using tunable filters: a review," Proc. SPIE 4056, 50–64 (2000). https://doi.org/10.1117/12.381686
12. M. Kosec et al., "Characterization of a spectrograph based hyperspectral imaging system," Opt. Express 21(10), 12085–12099 (2013). https://doi.org/10.1364/OE.21.012085
13. Y. Wang et al., "MEMS scanner enabled real-time depth sensitive hyperspectral imaging of biological tissue," Opt. Express 18(23), 24101–24108 (2010). https://doi.org/10.1364/OE.18.024101
14. Z. Liu et al., "Parallel scan hyperspectral fluorescence imaging system and biomedical application for microarrays," J. Phys.: Conf. Ser. 277, 012023 (2011). https://doi.org/10.1088/1742-6596/277/1/012023
15. R. A. Schultz et al., "Hyperspectral imaging: a novel approach for microscopic analysis," Cytometry 43(4), 239–247 (2001).
16. R. Leitner, T. Arnold and M. De Biasio, "High-sensitivity hyperspectral imager for biomedical video diagnostic applications," Proc. SPIE 7674, 76740E (2010). https://doi.org/10.1117/12.849442
17. Y. Guan et al., "New-styled system based on hyperspectral imaging," in Proc. IEEE Conf. Photonics and Optoelectronics, 1–3 (2011). https://doi.org/10.1109/SOPO.2011.5780492
18. M. E. Martin et al., "Development of an advanced hyperspectral imaging (HSI) system with applications for cancer detection," Ann. Biomed. Eng. 34(6), 1061–1068 (2006). https://doi.org/10.1007/s10439-006-9121-9
19. B. S. Sorg et al., "Hyperspectral imaging of hemoglobin saturation in tumor microvasculature and tumor hypoxia development," J. Biomed. Opt. 10(4), 044004 (2005). https://doi.org/10.1117/1.2003369
20. S. Grusche, "Basic slit spectroscope reveals three-dimensional scenes through diagonal slices of hyperspectral cubes," Appl. Opt. 53(20), 4594–4603 (2014). https://doi.org/10.1364/AO.53.004594
21. R. T. Kester et al., "Real-time hyperspectral endoscope for early cancer diagnostics," Proc. SPIE 7555, 75550A (2010). https://doi.org/10.1117/12.842726
22. Z. Liu et al., "Line-monitoring, hyperspectral fluorescence setup for simultaneous multi-analyte biosensing," Sensors 11(11), 10038–10047 (2011). https://doi.org/10.3390/s111110038
23. M. B. Sinclair et al., "Design, construction, characterization, and application of a hyperspectral microarray scanner," Appl. Opt. 43(10), 2079–2088 (2004). https://doi.org/10.1364/AO.43.002079
24. M. B. Sinclair et al., "Hyperspectral confocal microscope," Appl. Opt. 45(24), 6283–6291 (2006). https://doi.org/10.1364/AO.45.006283
25. N. Uedo et al., "Diagnosis of colonic adenomas by new autofluorescence imaging system: a pilot study," Digest. Endosc. 19(s1), S134–S138 (2007). https://doi.org/10.1111/den.2007.19.issue-s1
Biography

Hoong-Ta Lim received his bachelor's degree in engineering from NTU in 2012 and is currently pursuing his PhD at the Centre for Optical and Laser Engineering (COLE), School of Mechanical and Aerospace Engineering (MAE), NTU. His main research interests are in the area of multi- and hybrid-modality imaging for biomedical applications.

Vadakke Matham Murukeshan is an associate professor with the School of MAE and deputy director of COLE, NTU. His main research interests are biomedical optics, nanoscale optics, and applied optics for metrology. He has published over 250 research articles in leading journals and conference proceedings and has 6 patents and 8 innovation disclosures. He is a Fellow of the Institute of Physics and a member of SPIE.