I. INTRODUCTION
In this work, we present a novel wide-field, multispectral, time-resolved fluorescence microscope based on computational imaging techniques. The scheme provides spectral, temporal, and spatial information and a significant reduction in the acquired dataset by combining a single-pixel camera and a conventional pixelated detector [a charge-coupled device (CCD) camera] with compressive sensing and data fusion techniques. We take advantage of the versatile design to demonstrate a non-mechanical zoom capability, which allows us to reconstruct a region of interest (ROI) smaller than the field of view (FOV) at high resolution. We describe and characterize the experimental system and demonstrate its potential on cell samples.
II. MATERIALS AND METHODS
The modulated light is collected by a 2-in. achromatic doublet lens (L2, f = 200 mm, AC508-200-A, Thorlabs, Inc.), reflected by a dichroic mirror (DM, cut-on 550 nm, DMLP550R, Thorlabs, Inc.) toward an objective (Obj., Plan N 40× NA 0.65 infinity corrected, Olympus Corp.), and thus sent to the specimen.
In the detection path, the fluorescence is selected by a long-pass filter at 550 nm (F1, FELH0550, Thorlabs, Inc.) and can be directed to a spectrometer (Acton SP-2151i, Princeton Instruments, Inc.) by means of a lens (L3, f = 50 mm) and an aperture-converting fiber bundle (from a 3 mm diameter to 1 × 10 mm²). The spectrometer includes a diffraction grating (600 lines/mm) that disperses the light toward a 16-channel PMT detector (PML-16-1, Becker & Hickl GmbH) whose channels each have a bandwidth of ∼9 nm FWHM. The different PMT channels were corrected for their spectral efficiencies using a commercial spectrometer (Hamamatsu TM-VIS/NIR C10083CA-2100) as a reference. The detected signal is then processed with a TCSPC board (SPC-130-EM, Becker & Hickl GmbH) with 4096 temporal channels. Patterns are pre-loaded into the DMD memory and then projected for a pre-determined exposure time. A trigger signal generated by the DMD allows us to synchronize the TCSPC acquisition with each pattern. The temporal instrument response function (IRF) is acquired by means of a reflecting surface (a silicon wafer) at the sample plane, without the long-pass filter F1. All 16 PMT channels show an IRF width of less than 180 ps FWHM.
With a flip mirror (FM), light can be delivered to a 16-bit, 512 × 512 cooled CCD camera (VersArray 512, Princeton Instruments) through an achromatic doublet lens (L4, f = 200 mm). The camera is used both for focusing operations and for acquiring a high-resolution image (256 × 256 pixels) of the sample. Another filter (F2, FESH0700, Thorlabs, Inc.) is placed in front of the camera so that this detector works within the same spectral band as the 16-channel PMT.
The acquisition with SPC has the following mathematical formulation: $m_{M\times 1} = W_{M\times N}\, x_{N\times 1}$, where $x_{N\times 1}$ is the target to be imaged reshaped into a column array, $W_{M\times N}$ is the measurement matrix, and $m_{M\times 1}$ is the vector of measurements. In this work, $W_{M\times N}$ is built from the first $M$ rows of a scrambled Hadamard matrix, obtained, in turn, from random permutations of the rows and columns of the Hadamard matrix $H_{N\times N}$ with $N = 1024$.39,40 This means that the DMD modulates the light according to the rows of $W_{M\times N}$, reshaped into a collection of $M$ patterns whose size is 32 × 32 pixels. We substituted the native (+1, −1) matrix elements with (+1, 0) so that the patterns can be encoded on a DMD. Moreover, scrambled Hadamard patterns are pseudo-random and, thus, well suited for CS. Each pattern contains an equal number of 1 and 0 elements, except for one pattern that corresponds to uniform illumination (only 1 values): to prevent the latter from saturating the dynamic range of the detector, a measurement is performed with the complement of another pattern and the two are subsequently added together.41 The spectral and temporal dimensions of each element of $m_{M\times 1}$ are implicit, as they are acquired in parallel.

With the choice of a 40× objective, each pattern illuminates an area of about 150 × 150 μm², which is quantized into 32 × 32 pixels (each pixel comprises 16 × 16 DMD mirrors). However, the setup design offers a handy zoom feature to increase the spatial resolution:28 by using a calibrated and registered camera image (e.g., the one obtained at the end of the focusing operations), a smaller area can be selected, and therein, patterns can be projected by simply adjusting the codification of the micromirrors on the DMD. Hence, 32 × 32 pixels are exploited over different fields of view. This represents a method for multiscale imaging, which preserves the image's aspect ratio between zoom levels and does not require any change in the optical setup.

When the number of patterns M equals the desired number of pixels N, the resulting image can be reconstructed by a straightforward matrix inversion. If M < N patterns have been collected, the dataset $m_{M\times 1}$ requires a decoding step into pixel space with a reconstruction algorithm. We chose total variation minimization by augmented Lagrangian and alternating direction algorithms (TVAL3),42 a state-of-the-art solver in CS. A background noise subtraction is performed on the experimental data within each spectral channel, considering the mean value of the time-resolved signal over 1 ns before the arrival of the pulse; then, a low-resolution image (32 × 32) is reconstructed for each time bin and each spectral channel, leading to a 4D hypercube $y_{\rm spc}$.
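As an illustration of the acquisition model above, the following minimal sketch builds a scrambled Hadamard measurement matrix, remaps it to (+1, 0) DMD patterns, and simulates single-pixel measurements. The function names are ours, and TVAL3 itself (a MATLAB package) is not reproduced here, so the direct inversion shown applies only to the fully sampled case M = N.

```python
import numpy as np
from scipy.linalg import hadamard

def build_scrambled_hadamard(n_pixels=1024, n_patterns=1024, seed=0):
    """First M rows of a row/column-scrambled Hadamard matrix, remapped to (1, 0)."""
    rng = np.random.default_rng(seed)
    H = hadamard(n_pixels)                               # native (+1, -1) entries
    H = H[rng.permutation(n_pixels)][:, rng.permutation(n_pixels)]
    return (H[:n_patterns] + 1) // 2                     # (1, 0) patterns displayable on a DMD

def simulate_spc(W, x):
    """Bucket-detector measurements m = W x (one value per pattern, per time/spectral bin)."""
    return W @ x

# Toy target: a 32 x 32 image flattened into a column of N = 1024 pixels.
rng = np.random.default_rng(1)
x_true = rng.random(1024)

# Fully sampled case (M = N): the image follows from a direct matrix inversion.
W = build_scrambled_hadamard(1024, 1024).astype(float)
x_rec = np.linalg.solve(W, simulate_spc(W, x_true))
print(np.allclose(x_rec, x_true))                        # True

# Compressed case (M < N), e.g. CR = 80% as quoted for the cell sample: only ~205
# patterns are projected, and the dataset is decoded with a CS solver such as TVAL3.
W_cs = build_scrambled_hadamard(1024, 205).astype(float)
m_cs = simulate_spc(W_cs, x_true)                        # input to the reconstruction algorithm
```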
Further details on the use and settings of TVAL3 can be found in the supplementary material.

$y_{\rm spc}$ can be fused with the CCD camera image $y_{\rm ccd}$, which has a higher spatial resolution (for every pixel in $y_{\rm spc}$, there are 64 in $y_{\rm ccd}$), to reconstruct a high-resolution 4D hypercube named $X$. This is obtained with a variational approach through the minimization of the following merit function $F_X$:36

$F_X = \frac{1}{2\varepsilon}\left\lVert STX - y_{\rm ccd}\right\rVert_2^2 + \frac{1}{2}\left\lVert R_L X - y_{\rm spc}\right\rVert_2^2 + \frac{1}{2\delta}\left\lVert R_G X - \tilde{y}_{\rm spc}\right\rVert_2^2,$

where $S$ is an operator that integrates the multidimensional variable across the spectral dimension and, analogously, $T$ integrates across the temporal dimension (i.e., $S$ and $T$ applied to $X$ yield an image with the same dimensions as $y_{\rm ccd}$), $R_L$ is a spatial down-sampling operator, and $R_G$ integrates over the spatial dimensions to give a global wavelength–time map. $\tilde{y}_{\rm spc}$ is the wavelength–time map obtained by integrating $y_{\rm spc}$ over the spatial dimensions. $\varepsilon$ and $\delta$ are two hyperparameters tuned to find the best trade-off among the three contributions to $F_X$. The first term expresses the fidelity of the reconstructed data to the CCD image, the second the fidelity of the down-sampled reconstructed data, and the third the global fidelity of the time–wavelength map. Minimization is performed by conjugate-gradient descent with line search.36,43 A background noise subtraction is required for $y_{\rm ccd}$, too. The camera image is used to create a pixel mask for $y_{\rm spc}$, based on a fixed threshold, to exclude noisy pixels lying mainly in the background. The CCD image and the SPC data are normalized to the same energy (a minimal numerical sketch of this data-fusion step is shown at the end of this section).

Two types of samples have been measured to validate the system's capabilities. First, the imaging properties have been characterized with fluorescent beads (4 µm in diameter) deposited on a microscope slide (FocalCheck F36909, Invitrogen). Each bead is stained with four different fluorophores, two of which emit within the detection spectral window. The second is a sample of fixed HEK-293 (ATCC) cells. The cells are treated with poly(3-hexylthiophene-2,5-diyl) nanoparticles (P3HT NPs; abs.: 500 nm, em.: 650 nm), and their actin filaments are stained with phalloidin conjugated to Alexa Fluor 488 (ALF; abs.: 490 nm, em.: 520 nm, Sigma-Aldrich). Additional information about the sample preparation is reported in the supplementary material. These samples are of particular interest because of the possible role played by nanoparticles at the interface with living cells. Organic and inorganic nanoparticles have proven to be valuable tools for optical cell stimulation, biosensing, and drug delivery, among other uses, even reaching clinical trials for various applications.44–46
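The following sketch illustrates the data-fusion merit function $F_X$ introduced above on a toy hypercube. The operator definitions are simplified stand-ins, plain gradient descent replaces the conjugate-gradient scheme with line search used in the paper, and the array sizes, step size, and hyperparameter values are illustrative assumptions.

```python
import numpy as np

ny = nx = 8            # SPC grid (toy; the paper uses 32 x 32)
up = 2                 # up-sampling factor (toy; the paper has 64 camera pixels per SPC pixel)
nt, nl = 8, 4          # time bins and spectral channels (toy sizes)
eps, delta = 1.0, 1.0  # hyperparameters of F_X (illustrative values; tuned in the paper)
rng = np.random.default_rng(1)

def ST(X):
    # S and T together: integrate over wavelength and time -> camera-like image
    return X.sum(axis=(2, 3))

def RL(X):
    # spatial down-sampling by block summation (camera grid -> SPC grid)
    h, w = X.shape[:2]
    return X.reshape(h // up, up, w // up, up, nt, nl).sum(axis=(1, 3))

def RG(X):
    # integrate over space -> global wavelength-time map
    return X.sum(axis=(0, 1))

def merit(X, y_ccd, y_spc, y_gl):
    return (0.5 / eps * np.sum((ST(X) - y_ccd) ** 2)
            + 0.5 * np.sum((RL(X) - y_spc) ** 2)
            + 0.5 / delta * np.sum((RG(X) - y_gl) ** 2))

def grad(X, y_ccd, y_spc, y_gl):
    g = np.zeros_like(X)
    g += ((ST(X) - y_ccd) / eps)[:, :, None, None]                    # adjoint of ST: broadcast
    g += np.repeat(np.repeat(RL(X) - y_spc, up, axis=0), up, axis=1)  # adjoint of RL: replicate
    g += ((RG(X) - y_gl) / delta)[None, None, :, :]                   # adjoint of RG: broadcast
    return g

# Toy consistent data and a few plain gradient steps with a conservative fixed step.
X_true = rng.random((ny * up, nx * up, nt, nl))
y_ccd, y_spc, y_gl = ST(X_true), RL(X_true), RG(X_true)
X = np.zeros_like(X_true)
for _ in range(500):
    X -= 1e-3 * grad(X, y_ccd, y_spc, y_gl)
print(merit(X, y_ccd, y_spc, y_gl))   # decreases toward (but does not reach) zero
```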
III. RESULTS AND DISCUSSION
One control microscope slide containing HEK cells stained with the ALF fluorophore and another containing P3HT alone (no cells) were measured with the CCD camera [Figs. S3(a) and S3(d)] and the time-resolved multichannel PMT [Figs. S3(b) and S3(e); Figs. S3(c) and S3(f)] in order to characterize the fluorophores spectrally and temporally. ALF shows a mono-exponential decay with a lifetime of about 3.14 ns, while P3HT has a lifetime shorter than 200 ps.
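As a hedged illustration of how a mono-exponential lifetime such as the ALF value above can be estimated from a TCSPC decay, the following sketch fits a simple tail model with scipy on synthetic data; the neglect of IRF deconvolution and all names and values are assumptions made for illustration only, not the paper's analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, tau, background):
    # mono-exponential decay model with a constant background
    return amplitude * np.exp(-t / tau) + background

# Toy decay: 4096 TCSPC channels over a 25 ns window, tau = 3.14 ns (the ALF value above).
t = np.linspace(0.0, 25.0, 4096)                   # ns
rng = np.random.default_rng(2)
counts = rng.poisson(mono_exp(t, 1000.0, 3.14, 5.0))

# Fit only the tail (starting 1 ns after the maximum) so the finite IRF width can be neglected.
tail = t > 1.0
popt, _ = curve_fit(mono_exp, t[tail], counts[tail], p0=(800.0, 2.0, 1.0))
print(f"estimated lifetime: {popt[1]:.2f} ns")
```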
Based on these preliminary measurements, ALF can be highlighted by selecting a spectral band between 580 and 595 nm and a time gate of 7 ns starting 1.5 ns after the fluorescence peak. Similarly, P3HT is better discriminated with a spectral band of 650–700 nm and a time gate of 1 ns from the peak (a minimal sketch of this gating step is given at the end of this section). Data fusion is applied as described above, and the resulting two gated-and-filtered images are assigned to the corresponding spectral ranges and visualized in Fig. 3(c). Multidimensionality and data fusion help provide a synthetic representation of the specimen, which is useful for rapid screening or for deciding where to proceed with further measurements. These images allow us to better localize the aggregates of P3HT nanoparticles (the red spots) inside the cells, which is quite relevant for studying their spatial and temporal internalization.

To increase the spatial resolution and, hence, discriminate smaller P3HT clusters, the SPC measurement is repeated in a zoomed FOV [marked in red in Fig. 3(b)] of size 40 × 40 μm². We used 77 patterns of 16 × 16 pixels (CR = 70%), projected for 100 ms each (total time 8 s), and the dataset is fused with the camera image of Fig. 3(a), resulting in Fig. 3(d). From the two selected regions, circled in blue and orange, the spectral [Fig. 3(e)] and temporal [Fig. 3(f)] signals are extracted. While the low signal from the orange region does not allow us to infer its content from the emission spectrum alone, the temporal profile clearly shows a short-lifetime component, compatible with the presence of P3HT. Conversely, the blue region has a stronger signal in the leftmost spectral channels and a slower decay, corresponding to a prevalent contribution of ALF.

This set of measurements demonstrates the capability of the setup to discriminate nanoparticles in a complex environment, such as a biological one. The ability to collect multispectral information from nanoparticles could be exploited in biosensing, using lifetime-related information to enhance specificity.47,48 Furthermore, nanoparticles can serve as sensing/triggering elements for biological hubs, such as synapses and intracellular organelles.
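Referring to the gating described at the beginning of this section, the following sketch shows one possible way to extract a gated-and-filtered image from a fused hypercube by selecting a spectral band and a time gate; the axis conventions, array shapes, and peak-referenced gate definition are our assumptions, not the paper's exact implementation.

```python
import numpy as np

def gated_image(X, t_axis_ns, wl_axis_nm, band_nm, gate_start_ns, gate_len_ns):
    """Sum a (y, x, time, wavelength) hypercube over one spectral band and one time gate."""
    wl_sel = (wl_axis_nm >= band_nm[0]) & (wl_axis_nm <= band_nm[1])
    # Time gate defined relative to the fluorescence peak of the band-selected decay.
    decay = X[..., wl_sel].sum(axis=(0, 1, 3))
    t_peak = t_axis_ns[np.argmax(decay)]
    t_sel = (t_axis_ns >= t_peak + gate_start_ns) & (t_axis_ns < t_peak + gate_start_ns + gate_len_ns)
    return X[:, :, t_sel][..., wl_sel].sum(axis=(2, 3))

# Example with the gates quoted above (ALF: 580-595 nm, 7 ns gate starting 1.5 ns after
# the peak; P3HT: 650-700 nm, 1 ns gate from the peak) on a random stand-in hypercube.
X = np.random.default_rng(3).random((256, 256, 128, 16))
t_axis = np.linspace(0.0, 25.0, 128)               # ns (illustrative binning)
wl_axis = np.linspace(560.0, 710.0, 16)            # nm (illustrative channel centers)
alf_img = gated_image(X, t_axis, wl_axis, (580.0, 595.0), 1.5, 7.0)
p3ht_img = gated_image(X, t_axis, wl_axis, (650.0, 700.0), 0.0, 1.0)
```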
IV. CONCLUSION AND FUTURE PERSPECTIVES

In this work, we have proposed and experimentally validated a time-resolved multispectral fluorescence microscope based on single-pixel camera detection and its symbiotic use with compressive sensing and data fusion algorithms. We have shown that this integration between hardware and algorithms, at the core of the computational imaging approach, is an effective strategy for reducing the acquisition time while preserving multidimensional images with high spatial, temporal, and spectral resolutions. While the spectral and temporal information is acquired in parallel and fully sampled, in the spatial dimensions we have reported images with 32 × 32 pixels obtained from a low percentage of measurements (CR = 80% for the cell sample), whose pixel count is then increased up to 256 × 256 by exploiting a camera image. In this way, the measurement duration is much shorter (about 0.3%) than that of a fully sampled SPC system with the same number of pixels in the final image and the same integration time for photon counting. This approach is important when a short measurement time is a priority to avoid bleaching and cell damage or to capture dynamic phenomena. Moreover, it enables a handy zoom capability without changing the magnification optics. Zooming allows us to quickly observe FOVs of different sizes and, consequently, different spatial resolutions, with relevant opportunities in biological applications.
The use of multiple detectors to observe the same sample and perform data fusion inevitably raises the issue of their different spectral efficiencies. Since the PMT data are spectrally calibrated and the response of the CCD sensor is nearly flat within the spectral window used here, this aspect was not critical in the present work; in general, however, it may be useful to let the S operator of the data fusion account for the spectral efficiencies. The use of a DMD (a standard component of common light projectors), a state-of-the-art cooled camera, and a linear array of time-resolved detectors makes this design less expensive, less complex, and more sensitive than pixelated-detector-based systems.
Future developments to improve the performance, in terms of shorter acquisition times and higher temporal, spectral, and spatial resolutions, involve hardware and software co-design. Algorithms for the optimization of the data-fusion hyperparameters can be applied49,50 to automate a process that currently relies on manual tuning. A possible strategy to further reduce the integration time relies on technological improvements of the detection chain, exploiting more efficient TCSPC systems,51,52 more sensitive detectors, or higher parallelism in multichannel detector architectures.29 Concerning the reconstruction algorithm, machine learning has recently proven effective32 at high CRs and with lower computation times. Patterns other than those based on the Hadamard matrix30,32,53 can be used to increase the CR and the spatial detail, such as wavelet, discrete cosine, or Morlet patterns.54,55 In particular, optimal patterns can be designed with deep learning56 or even with adaptive methods.57 Moreover, a static/dynamic combination of the compressed dataset with global analysis and linear fast-fit approaches58 can provide faster analysis toward real-time applications. In conclusion, we believe that the proposed experimental system represents an ideal platform for the development of advanced computational imaging approaches exploiting novel algorithms.