Color and multispectral imaging and computational appearance
For personal use only.
Grillini F, Aksas L, Lapray P-J, Foulonneau A, Thomas J-B, George S and Bigué L (2024), "Relationship between reflectance and degree of polarization in the VNIR-SWIR: A case study on art paintings with polarimetric reflectance imaging spectroscopy", PLOS ONE., May, 2024. Vol. 19(5), pp. 1-21. Public Library of Science.
Abstract: We study the relationship between reflectance and the degree of linear polarization of radiation that bounces off the surface of an unvarnished oil painting. We design a VNIR-SWIR (400 nm to 2500 nm) polarimetric reflectance imaging spectroscopy setup that deploys unpolarized light and allows us to estimate the Stokes vector at the pixel level. We observe a strong negative correlation between the S0 component of the Stokes vector (which can be used to represent the reflectance) and the degree of linear polarization in the visible interval (average -0.81), while the correlation is weaker and varying in the infrared range (average -0.50 in the NIR range between 780 and 1500 nm, and average -0.87 in the SWIR range between 1500 and 2500 nm). By tackling the problem with multi-resolution image analysis, we observe a dependence of the correlation on the local complexity of the surface. Indeed, we observe a general trend in which the negative correlation is strengthened by the artificial flattening introduced at low image resolutions.
BibTeX:
@article{2024PlosOne, author = {Grillini, Federico and Aksas, Lyes and Lapray, Pierre-Jean and Foulonneau, Alban and Thomas, Jean-Baptiste and George, Sony and Bigué, Laurent}, title = {Relationship between reflectance and degree of polarization in the VNIR-SWIR: A case study on art paintings with polarimetric reflectance imaging spectroscopy}, journal = {PLOS ONE}, publisher = {Public Library of Science}, year = {2024}, volume = {19}, number = {5}, pages = {1-21}, url = {https://jbthomas.org/Journals/2024PlosOne.pdf}, doi = {10.1371/journal.pone.0303018} }
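For readers who want to reproduce the core quantity of this study, the degree of linear polarization follows from the first three Stokes components by the standard formula DoLP = sqrt(S1² + S2²) / S0. A minimal NumPy sketch (function and variable names are hypothetical, not from the paper) computing per-pixel DoLP and its Pearson correlation with S0:

```python
import numpy as np

def degree_of_linear_polarization(s0, s1, s2, eps=1e-12):
    """Per-pixel DoLP from the first three Stokes components."""
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, eps)

def dolp_s0_correlation(s0, s1, s2):
    """Pearson correlation between S0 (reflectance proxy) and DoLP,
    the statistic reported per spectral band in the abstract."""
    dolp = degree_of_linear_polarization(s0, s1, s2)
    return np.corrcoef(s0.ravel(), dolp.ravel())[0, 1]
```

In the paper this correlation is evaluated band by band over VNIR-SWIR images; the sketch above shows the computation for a single band.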
Erba I, Buzzelli M, Thomas J-B, Hardeberg JY and Schettini R (2024), "Improving RGB illuminant estimation exploiting spectral average radiance", J. Opt. Soc. Am. A., Mar, 2024. Vol. 41(3), pp. 516-526. Optica Publishing Group.
Abstract: We introduce a method that enhances RGB color constancy accuracy by combining neural network and k-means clustering techniques. Our approach stands out from previous works because we combine multispectral and color information together to estimate illuminants. Furthermore, we investigate the combination of the illuminant estimation in the RGB color and in the spectral domains, as a strategy to provide a refined estimation in the RGB color domain. Our investigation can be divided into three main points: (1) identify the spatial resolution for sampling the input image in terms of RGB color and spectral information that brings the highest performance; (2) determine whether it is more effective to predict the illuminant in the spectral or in the RGB color domain, and finally, (3) assuming that the illuminant is in fact predicted in the spectral domain, investigate if it is better to have a loss function defined in the RGB color or spectral domain. Experiments are carried out on NUS: a standard dataset of multispectral radiance images with an annotated spectral global illuminant. Among the several considered options, the best results are obtained with a model trained to predict the illuminant in the spectral domain using an RGB color loss function. In terms of comparison with the state of the art, this solution improves the recovery angular error metric by 66% compared to the best tested spectral method, and by 41% compared to the best tested RGB method.
BibTeX:
@article{2024JOSA, author = {Ilaria Erba and Marco Buzzelli and Jean-Baptiste Thomas and Jon Yngve Hardeberg and Raimondo Schettini}, title = {Improving RGB illuminant estimation exploiting spectral average radiance}, journal = {J. Opt. Soc. Am. A}, publisher = {Optica Publishing Group}, year = {2024}, volume = {41}, number = {3}, pages = {516--526}, url = {http://jbthomas.org/Journals/2024JOSA.pdf}, doi = {10.1364/JOSAA.510159} }
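The recovery angular error used for the comparisons above is the standard colour-constancy metric: the angle between the estimated and ground-truth illuminant vectors. A minimal sketch (function name hypothetical):

```python
import numpy as np

def recovery_angular_error(est, gt):
    """Angle in degrees between an estimated and a ground-truth
    illuminant vector (RGB or spectral), the usual colour-constancy
    error metric."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

The metric is scale-invariant, so two illuminants that differ only by intensity score an error of zero, which is why it is the common choice for illuminant estimation.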
Nguyen M, Thomas J-B and Farup I (2024), "Exploring Imaging Methods for In Situ Measurements of the Visual Appearance of Snow", Geosciences. Vol. 14(2)
Abstract: We explored imaging methods to perform in situ field measurements of physical correlates of the visual appearance of snow. Measurements were performed at three locations in Norway between February and March 2023. We used a method to estimate the absorption and scattering coefficients of snow using only one measurement of reflectance captured by the Dia-Stron© TLS850 translucency meter. We also measured the sparkle indicators (contrast and density of sparkle spots) from digital images of snow. The contrast of sparkle spots can be defined as the median value of all the pixels identified as sparkle spots by an algorithm, and the density of sparkle spots is the number of sparkle spots in a selected area of the image. In the case of the sparkle of the snow surface, we found that there is a potential to use the sparkle indicators for classifying the grain types, but it requires a larger data set coupled with expert labelling to define the type of snow. For the absorption and scattering properties, the measurements confirm the fact that snow is a weakly absorptive and highly scattering material when modelling light interactions in the snow. No correlation between the optical properties and sparkle could be found in our data.
BibTeX:
@article{2024Geosciences, author = {Nguyen, Mathieu and Thomas, Jean-Baptiste and Farup, Ivar}, title = {Exploring Imaging Methods for In Situ Measurements of the Visual Appearance of Snow}, journal = {Geosciences}, year = {2024}, volume = {14}, number = {2}, url = {http://jbthomas.org/Journals/2024Geosciences.pdf}, doi = {10.3390/geosciences14020035} }
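The two sparkle indicators defined in the abstract can be sketched directly from their definitions: contrast is the median value of the pixels identified as sparkle spots, and density is the number of sparkle spots in the analysed area. The sketch below is illustrative only; a plain intensity threshold stands in for the paper's actual detection algorithm, and the function name is hypothetical:

```python
import numpy as np
from scipy import ndimage

def sparkle_indicators(gray, threshold):
    """Return (contrast, density) for a grayscale snow image.
    Contrast: median of the pixels flagged as sparkle spots.
    Density: number of connected sparkle spots in the image.
    A simple threshold replaces the paper's detector here."""
    mask = gray > threshold
    if not mask.any():
        return 0.0, 0
    # Count connected components as individual sparkle spots.
    _, num_spots = ndimage.label(mask)
    contrast = float(np.median(gray[mask]))
    return contrast, num_spots
```

Density would normally be normalised by the area of the selected region; that step is omitted here for brevity.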
Russo S, Granget E, Malefakis A, Brambilla L, Thomas J-B and Joseph E (2024), "About metal soaps in ethnographic collections", CeROArt. Vol. 2024(13)
Abstract: This article discusses the value of ethnographic objects in museums and collections, and the challenges of preserving them due to their complex composition and poly-materiality. Specifically, the study focuses on the formation of metal soaps on objects made of metal in contact with leather or wood parts. The study proposes a collaborative approach between conservators and scientists to identify and understand the metal soaps present on a selection of objects from the Ethnographic Museum at the University of Zurich. The authors wish to raise awareness about material alteration in ethnographic collections and museum contexts and to promote communication between different actors in the field, while enabling conservation professionals to confirm the formation of a class of degradation products in the absence of specific analytical techniques. Keywords: ethnographic collections, metal soaps, composite objects, conservation, µATR-FTIR.
BibTeX:
@article{2024CEROART, author = {Russo, Silvia and Granget, Elodie and Malefakis, Alexis and Brambilla, Laura and Thomas, Jean-Baptiste and Joseph, Edith}, title = {About metal soaps in ethnographic collections}, journal = {CeROArt}, year = {2024}, volume = {2024}, number = {13}, url = {https://jbthomas.org/Journals/2024CEROART.pdf}, doi = {10.4000/12ld6} }
Fsian AN, Thomas J-B, Hardeberg JY and Gouton P (2024), "Spectral Reconstruction from RGB Imagery: A Potential Option for Infinite Spectral Data?", Sensors. Vol. 24(11)
Abstract: Spectral imaging has revolutionised various fields by capturing detailed spatial and spectral information. However, its high cost and complexity limit the acquisition of a large amount of data to generalise processes and methods, thus limiting widespread adoption. To overcome this issue, a body of the literature investigates how to reconstruct spectral information from RGB images, with recent methods reaching a fairly low reconstruction error. This article explores the modification of information in the case of RGB-to-spectral reconstruction beyond reconstruction metrics, with a focus on assessing the accuracy of the reconstruction process and its ability to replicate full spectral information. In addition, we conduct a colorimetric relighting analysis based on the reconstructed spectra. We investigate the information representation by principal component analysis and demonstrate that, while the reconstruction error of the state-of-the-art reconstruction method is low, the nature of the reconstructed information is different. While the approach appears to handle illumination very well in colour imaging, the distribution of information difference between the measured and estimated spectra suggests that caution should be exercised before generalising the use of this approach.
BibTeX:
@article{2024bSensors, author = {Fsian, Abdelhamid N. and Thomas, Jean-Baptiste and Hardeberg, Jon Y. and Gouton, Pierre}, title = {Spectral Reconstruction from RGB Imagery: A Potential Option for Infinite Spectral Data?}, journal = {Sensors}, year = {2024}, volume = {24}, number = {11}, url = {https://jbthomas.org/Journals/2024bSensors.pdf}, doi = {10.3390/s24113666} }
Askary H, Hardeberg JY and Thomas J-B (2024), "Raw Spectral Filter Array Imaging for Scene Recognition", Sensors. Vol. 24(6)
Abstract: Scene recognition is the task of identifying the environment shown in an image. Spectral filter array cameras allow for fast capture of multispectral images. Scene recognition in multispectral images is usually performed after demosaicing the raw image. Along with adding latency, this makes the classification algorithm limited by the artifacts produced by the demosaicing process. This work explores scene recognition performed on raw spectral filter array images using convolutional neural networks. For this purpose, a new raw image dataset is collected for scene recognition with a spectral filter array camera. The classification is performed using a model constructed based on the pretrained Places-CNN. This model utilizes all nine channels of spectral information in the images. A label mapping scheme is also applied to classify the new dataset. Experiments are conducted with different pre-processing steps applied on the raw images and the results are compared. Higher-resolution images are found to perform better even if they contain mosaic patterns.
BibTeX:
@article{2024aSensors, author = {Askary, Hassan and Hardeberg, Jon Yngve and Thomas, Jean-Baptiste}, title = {Raw Spectral Filter Array Imaging for Scene Recognition}, journal = {Sensors}, year = {2024}, volume = {24}, number = {6}, url = {http://jbthomas.org/Journals/2024aSensors.pdf}, doi = {10.3390/s24061961} }
Grillini F, Thomas J-B and George S (2023), "Logistic splicing correction for VNIR & SWIR reflectance imaging spectroscopy", Opt. Lett., Jan, 2023. Vol. 48(2), pp. 403-406. Optica Publishing Group.
Abstract: In the field of spectroscopy, a splicing correction is a process by which two spectra captured with different sensors in adjacent or overlapping electromagnetic spectrum ranges are smoothly connected. In our study, we extend this concept to the case of reflectance imaging spectroscopy in the visible & near-infrared (VNIR) and short-wave infrared (SWIR), accounting for additional sources of noise that arise at the pixel level. The proposed approach exploits the adaptive fitting of a logistic function to compute correcting coefficients that harmonize the two spectral sets. This short Letter addresses usage conditions and compares results against the existing state of the art.
BibTeX:
@article{2023OL, author = {Federico Grillini and Jean-Baptiste Thomas and Sony George}, title = {Logistic splicing correction for VNIR \& SWIR reflectance imaging spectroscopy}, journal = {Opt. Lett.}, publisher = {Optica Publishing Group}, year = {2023}, volume = {48}, number = {2}, pages = {403--406}, url = {http://jbthomas.org/Journals/2023OL.pdf}, doi = {10.1364/OL.478691} }
Russo S, Brambilla L, Thomas JB and Joseph E (2023), "But aren’t all soaps metal soaps? A review of applications, physico-chemical properties of metal soaps and their occurrence in cultural heritage studies", Heritage Science., Aug, 2023. Vol. 11(172), pp. 18. Springer.
Abstract: Metal soaps, the organic salts resulting from the interaction of fatty acids and metal cations, arouse interest in the scientific field because of their versatility in a great range of chemical applications as well as because of the mechanism of their formation during degradation processes. This article presents a review of the synthetic pathways used to produce metal soaps, their relevant physico-chemical properties, and how these reflect in their applications. Common industrial uses of metal soaps are reported, with a particular focus on those applications, such as cosmetics, paints, and coatings, that have an impact on the cultural heritage field. In addition, the occurrence of metal soaps in cultural heritage studies is presented, ranging from archaeological and ethnographic artefacts to fine art objects, and discussed per class of materials. An overview of the presence or absence of metal soaps in historical artefacts due to the interaction of metal parts or mineral pigments with fatty acids is given herein. This collection shows a variety of situations in which metal soaps—particularly lead, zinc and copper soaps—can form on composite objects made of different materials such as wood, leather and fatty-acid-containing materials (e.g., waxes), in the presence of metal, metal alloys or pigments.
BibTeX:
@article{2023HeritageScience, author = {Russo, Silvia and Brambilla, Laura and Thomas, Jean-Baptiste and Joseph, Edith}, title = {But aren’t all soaps metal soaps? A review of applications, physico-chemical properties of metal soaps and their occurrence in cultural heritage studies}, journal = {Heritage Science}, publisher = {Springer}, year = {2023}, volume = {11}, number = {172}, pages = {18}, url = {http://jbthomas.org/Journals/2023HeritageScience.pdf}, doi = {10.1186/s40494-023-00988-3} }
Nguyen M, Thomas J-B and Farup I (2023), "Measuring the Optical Properties of Highly Diffuse Materials", Sensors. Vol. 23(15)
Abstract: Measuring the optical properties of highly diffuse materials is a challenge, as difficulties can arise from the white colour or from an oversaturation of pixels in the acquisition system. We used a spatially resolved method and adapted a nonlinear trust-region algorithm to fit the Farrell diffusion theory model. We established an inversion method to estimate two optical properties of a material through a single reflectance measurement: the absorption coefficient and the reduced scattering coefficient. We demonstrate the validity of our method by comparing results obtained on milk samples, with a good fit and a retrieval of linear correlations with the fat content, given by R2 scores over 0.94 with low p-values. The values of absorption coefficients retrieved vary between 1 × 10−3 and 8 × 10−3 mm−1, whilst the values of the scattering coefficients obtained from our method are between 3 and 8 mm−1 depending on the percentage of fat in the milk sample, and under the assumption of the anisotropy factor g>0.8. We also measured and analyzed the results on white paint and paper, although the paper results were difficult to relate to indicators. Thus, the method designed works for highly diffuse isotropic materials.
BibTeX:
@article{2023bSensors, author = {Nguyen, Mathieu and Thomas, Jean-Baptiste and Farup, Ivar}, title = {Measuring the Optical Properties of Highly Diffuse Materials}, journal = {Sensors}, year = {2023}, volume = {23}, number = {15}, url = {http://jbthomas.org/Journals/2023bSensors.pdf}, doi = {10.3390/s23156853} }
Elezabi O, Guesney-Bodet S and Thomas J-B (2023), "Impact of Exposure and Illumination on Texture Classification Based on Raw Spectral Filter Array Images", Sensors. Vol. 23(12)
Abstract: Spectral Filter Array cameras provide a fast and portable solution for spectral imaging. Texture classification from images captured with such a camera usually happens after a demosaicing process, which makes the classification performance rely on the quality of the demosaicing. This work investigates texture classification methods applied directly to the raw image. We trained a Convolutional Neural Network and compared its classification performance to the Local Binary Pattern method. The experiment is based on real SFA images of the objects of the HyTexiLa database and not on simulated data, as is often done. We also investigate the role of integration time and illumination on the performance of the classification methods. The Convolutional Neural Network outperforms other texture classification methods even with a small amount of training data. Additionally, we demonstrated the model’s ability to adapt and scale for different environmental conditions such as illumination and exposure compared to other methods. In order to explain these results, we analyze the extracted features of our method and show the ability of the model to recognize different shapes, patterns, and marks in different textures.
BibTeX:
@article{2023aSensors, author = {Elezabi, Omar and Guesney-Bodet, Sebastien and Thomas, Jean-Baptiste}, title = {Impact of Exposure and Illumination on Texture Classification Based on Raw Spectral Filter Array Images}, journal = {Sensors}, year = {2023}, volume = {23}, number = {12}, url = {http://jbthomas.org/Journals/2023aSensors.pdf}, doi = {10.3390/s23125443} }
Colantoni P, Thomas J-B, Hébert M, Caissard J-C and Trémeau A (2022), "Web-Based Interaction and Visualization of Spectral Reflectance Images: Application to Vegetation Inspection", SN Computer Science. Vol. 3, pp. 12.
Abstract: Visualization of spectral images and interaction with them is still a challenge. We demonstrate an edge-computing, web-technology-based solution to handle spectral image data and allow real-time interaction with it. The solution is flexible, efficient, and broadly applicable. It includes visualization strategies based on color science, image processing and statistics. An example of use is provided through a collaboration with a domain knowledge expert for an application related to vegetation inspection.
BibTeX:
@article{2022SNCS, author = {Colantoni, Philippe and Thomas, Jean-Baptiste and Hébert, Mathieu and Caissard, Jean-Claude and Trémeau, Alain}, editor = {Springer}, title = {Web-Based Interaction and Visualization of Spectral Reflectance Images: Application to Vegetation Inspection}, journal = {SN Computer Science}, year = {2022}, volume = {3}, pages = {12}, note = {eng}, url = {http://jbthomas.org/Journals/2022SNCS.pdf}, doi = {10.1007/s42979-021-00870-8} }
Prieur C, Rabatel A, Thomas J-B, Farup I and Chanussot J (2022), "Machine Learning Approaches to Automatically Detect Glacier Snow Lines on Multi-Spectral Satellite Images", Remote Sensing. Vol. 14(16)
Abstract: Documenting the inter-annual variability and the long-term trend of the glacier snow line altitude is highly relevant to document the evolution of glacier mass changes. Automatically identifying the snow line on glaciers is challenging; recent developments in machine learning approaches show promise to tackle this issue. This manuscript presents a proof of concept of machine learning approaches applied to multi-spectral images to detect the snow line and quantify its average altitude. The tested approaches include the combination of different image processing and classification methods, and take into account cast shadows. The efficiency of these approaches is evaluated on mountain glaciers in the European Alps by comparing the results with manually annotated data. Solutions provided by the different approaches are robust when compared to the ground-truth snow lines, with a Pearson’s correlation ranging from 79% to 96% depending on the method. However, the tested approaches may fail when snow lines are not continuous or exhibit a strong change of elevation. The major advantage over the state of the art is that the proposed approach does not require one calibration per glacier.
BibTeX:
@article{2022RS, author = {Prieur, Colin and Rabatel, Antoine and Thomas, Jean-Baptiste and Farup, Ivar and Chanussot, Jocelyn}, title = {Machine Learning Approaches to Automatically Detect Glacier Snow Lines on Multi-Spectral Satellite Images}, journal = {Remote Sensing}, year = {2022}, volume = {14}, number = {16}, url = {http://jbthomas.org/Journals/2022RS.pdf}, doi = {10.3390/rs14163868} }
Gigilashvili D, Urban P, Thomas J-B, Pedersen M and Hardeberg JY (2022), "The Impact of Optical and Geometrical Thickness on Perceived Translucency Differences", Journal of Perceptual Imaging. Vol. 5
Abstract: In this work we study the perception of suprathreshold translucency differences to expand the knowledge about material appearance perception in imaging and computer graphics, and 3D printing applications. Translucency is one of the most important appearance attributes that significantly affects the look of objects and materials. However, the knowledge about translucency perception remains limited. Even less is known about the perception of translucency differences between materials. We hypothesize that humans are more sensitive to small changes in absorption and scattering coefficients when optically thin materials are examined and when objects have geometrically thin parts. To test these hypotheses, we generated images of objects with different shapes and subsurface scattering properties and conducted psychophysical experiments with these visual stimuli. The analysis of the experimental data supports these hypotheses and, based on post-experiment comments made by the observers, we argue that the results could be a demonstration of a fundamental difference between translucency perception mechanisms in see-through and non-see-through objects and materials.
BibTeX:
@article{2022JPI, author = {Gigilashvili, Davit and Urban, Philipp and Thomas, Jean-Baptiste and Pedersen, Marius and Hardeberg, Jon Yngve}, title = {The Impact of Optical and Geometrical Thickness on Perceived Translucency Differences}, journal = {Journal of Perceptual Imaging}, year = {2022}, volume = {5}, url = {http://jbthomas.org/Journals/2022JPI.pdf}, doi = {10.2352/J.Percept.Imaging.2022.5.000501} }
Bozorgian A, Pedersen M and Thomas J-B (2022), "Modification and evaluation of the peripheral contrast sensitivity function models", J. Opt. Soc. Am. A., Sep, 2022. Vol. 39(9), pp. 1650-1658. Optica Publishing Group.
Abstract: We propose a series of modifications to the Barten contrast sensitivity function model for peripheral vision based on anatomical and psychophysical studies. These modifications result in a luminance pattern detection model that could quantitatively describe the extent of veridical pattern resolution and the aliasing zone. We evaluated our model against psychophysical measurements in peripheral vision. Our numerical assessment shows that the modified Barten leads to lower estimate errors than its original version.
BibTeX:
@article{2022JOSA, author = {Aliakbar Bozorgian and Marius Pedersen and Jean-Baptiste Thomas}, title = {Modification and evaluation of the peripheral contrast sensitivity function models}, journal = {J. Opt. Soc. Am. A}, publisher = {Optica Publishing Group}, year = {2022}, volume = {39}, number = {9}, pages = {1650--1658}, url = {http://jbthomas.org/Journals/2022JOSA.pdf}, doi = {10.1364/JOSAA.445234} }
Nguyen M, Thomas J-B and Farup I (2022), "Statistical Analysis of Sparkle in Snow Images", Journal of Imaging Science and Technology., pp. 050404-1 - 050404-11.
Abstract: Sparkle from snow is a common phenomenon in Nature but not well studied in the literature. We perform a statistical study on digital snow images captured in situ to analyze sparkle events, using the contrast and density of sparkle spots as descriptors. The method for measuring sparkle by Ferrero et al. is adapted, tested, and verified for the case of snow. The dataset is divided into three categories representing the type of snow acquired: dense snow, fresh snow, and old snow. Our analysis highlights the link between the sparkle of snow, the nature of snow and its grain structure. Sparkle could thus be a feature used for snow classification.
BibTeX:
@article{2022JIST, author = {Nguyen, Mathieu and Thomas, Jean-Baptiste and Farup, Ivar}, title = {Statistical Analysis of Sparkle in Snow Images}, journal = {Journal of Imaging Science and Technology}, year = {2022}, pages = {050404-1 - 050404-11}, url = {http://jbthomas.org/Journals/2022JIST.pdf}, doi = {10.2352/J.ImagingSci.Technol.2022.66.5.050404} }
Lapray P-J, Thomas J-B and Farup I (2022), "Bio-Inspired Multimodal Imaging in Reduced Visibility", Frontiers in Computer Science. Vol. 3
Abstract: The visual systems found in nature rely on capturing light under different modalities, in terms of spectral sensitivities and polarization sensitivities. Numerous imaging techniques are inspired by this variety, among which the most famous is color imaging, inspired by the trichromacy theory of the human visual system. We investigate the spectral and polarimetric properties of biological imaging systems that will lead to the best performance on scene imaging through haze, i.e., dehazing. We design a benchmark experiment based on modalities inspired by several visual systems, and adapt state-of-the-art image reconstruction algorithms to those modalities. We show the difference in performance of each studied system and discuss it in light of our methodology and the statistical relevance of our data.
BibTeX:
@article{2022Frontiers, author = {Lapray, Pierre-Jean and Thomas, Jean-Baptiste and Farup, Ivar}, title = {Bio-Inspired Multimodal Imaging in Reduced Visibility}, journal = {Frontiers in Computer Science}, year = {2022}, volume = {3}, url = {http://jbthomas.org/Journals/2022Frontiers.pdf}, doi = {10.3389/fcomp.2021.737144} }
Zendagui A, Le Goïc G, Chatoux H, Thomas J-B, Jochum P, Maniglier S and Mansouri A (2022), "Reflectance Transformation Imaging as a Tool for Computer-Aided Visual Inspection", Applied Sciences. Vol. 12(13)
Abstract: This work investigates the use of Reflectance Transformation Imaging (RTI) rendering for visual inspection. This imaging technique is being used more and more often for the inspection of the visual quality of manufactured surfaces. It allows reconstructing a dynamic virtual rendering of a surface from the acquisition of a sequence of images where only the illumination direction varies. We investigate, through psychometric experimentation, the influence of different essential parameters in the RTI approach, including modeling methods, the number of lighting positions and the measurement scale. In addition, to include the dynamic aspect of perception mechanisms in the methodology, the psychometric experiments are based on a design of experiments approach and conducted on reconstructed visual rendering videos. The proposed methodology is applied to different industrial surfaces. The results show that the RTI approach can be a relevant tool for computer-aided visual inspection. The proposed methodology makes it possible to objectively quantify the influence of RTI acquisition and processing factors on the perception of visual properties, and the results obtained show that their impact in terms of visual perception can be significant.
BibTeX:
@article{2022ApplScience, author = {Zendagui, Abir and Le Goïc, Gaëtan and Chatoux, Hermine and Thomas, Jean-Baptiste and Jochum, Pierre and Maniglier, Stéphane and Mansouri, Alamin}, title = {Reflectance Transformation Imaging as a Tool for Computer-Aided Visual Inspection}, journal = {Applied Sciences}, year = {2022}, volume = {12}, number = {13}, url = {http://jbthomas.org/Journals/2022ApplScience.pdf}, doi = {10.3390/app12136610} }
Gigilashvili D, Thomas J-B, Hardeberg JY and Pedersen M (2021), "Translucency perception: A review", Journal of Vision., Aug, 2021. Vol. 21(8), pp. 4-4.
Abstract: Translucency is an optical and a perceptual phenomenon that characterizes subsurface light transport through objects and materials. Translucency as an optical property of a material relates to the radiative transfer inside and through this medium, and translucency as a perceptual phenomenon describes the visual sensation experienced by humans when observing a given material under given conditions. The knowledge about the visual mechanisms of translucency perception remains limited. Accurate prediction of the appearance of translucent objects can have a significant commercial impact in fields such as three-dimensional printing. However, little is known about how the optical properties of a material relate to the perception evoked in humans. This article overviews the state of knowledge about the visual perception of translucency and highlights the applications of translucency perception research. Furthermore, this review summarizes current knowledge gaps, fundamental challenges and existing ambiguities with the goal of facilitating translucency perception research in the future.
BibTeX:
@article{2021JOV, author = {Gigilashvili, Davit and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve and Pedersen, Marius}, title = {Translucency perception: A review}, journal = {Journal of Vision}, year = {2021}, volume = {21}, number = {8}, pages = {4-4}, url = {http://jbthomas.org/Journals/2021JOV.pdf}, doi = {10.1167/jov.21.8.4} }
Gigilashvili D, Thomas J-B, Pedersen M and Hardeberg JY (2021), "On the appearance of objects and materials: Qualitative analysis of experimental observations", Journal of the International Colour Association. Vol. 27, pp. 26-55.
Abstract: Perception of the appearance of different materials and objects is a complex psychophysical phenomenon, and its neurophysiological and behavioral mechanisms are far from being fully understood. The various appearance attributes are usually studied separately. In addition, no comprehensive and functional total appearance modelling has been done to date. We have conducted experiments using physical objects, asking observers to describe the objects and carry out visual tasks. The process has been videotaped and analysed qualitatively using Grounded Theory Analysis, a qualitative research methodology from the social sciences. In this work, we construct a qualitative model of this data and compare it to material appearance models. The model highlights the impact of the conditions of observation, and the necessity of a reference and comparison for adequate assessment of material appearance. We then formulate a set of research hypotheses. While our model only describes our data, the hypotheses could be general if they are verified by quantitative studies. In order to assess the potential generalisation of the model, the hypotheses are discussed in the context of different quantitative state-of-the-art works.
BibTeX:
@article{2021cJAIC, author = {Gigilashvili, Davit and Thomas, Jean-Baptiste and Pedersen, Marius and Hardeberg, Jon Yngve}, editor = {International Colour Association}, title = {On the appearance of objects and materials: Qualitative analysis of experimental observations}, journal = {Journal of the International Colour Association}, year = {2021}, volume = {27}, pages = {26--55}, note = {eng}, url = {http://jbthomas.org/Journals/2021cJAIC.pdf} }
Grillini F, Thomas J-B and George S (2021), "Comparison of Imaging Models for Spectral Unmixing in Oil Painting", Sensors. Vol. 21(7)
Abstract: The radiation captured in spectral imaging depends on both the complex light–matter interaction and the integration of the radiant light by the imaging system. In order to obtain material-specific information, it is important to define and invert an imaging process that takes into account both aspects. In this article, we investigate the use of several mixing models and evaluate their performances in the study of oil paintings. We propose an evaluation protocol, based on different features, i.e., spectral reconstruction, pigment mapping, and concentration estimation, which allows investigating the different properties of those mixing models in the context of spectral imaging. We conduct our experiment on oil-painted mockup samples of mixtures and show that models based on subtractive mixing perform the best for those materials.
BibTeX:
@article{2021bSensors, author = {Grillini, Federico and Thomas, Jean-Baptiste and George, Sony}, title = {Comparison of Imaging Models for Spectral Unmixing in Oil Painting}, journal = {Sensors}, year = {2021}, volume = {21}, number = {7}, url = {http://jbthomas.org/Journals/2021bSensors.pdf}, doi = {10.3390/s21072471} } |
Grillini F, Thomas J-B and George S (2021), "VisNIR pigment mapping and re-rendering of an experimental painting", Journal of the International Colour Association. Vol. 26, pp. 3-10. |
Abstract: Pigment mapping allows the classification and estimation of the abundances of pigments in paintings. The information learned becomes extremely important for conservators, who are then able to decide the best strategies in the conservation of the artefacts. When the goal is to restore a painting, it is also important to know what the effects of the newly introduced materials are. To fulfil this purpose, a proper mixing model must be defined. We propose a framework to perform pigment mapping on the hyperspectral image of an experimental painting realised for the occasion, with the goal of rendering a colour image using the concentrations retrieved from the mapping. Contrary to spectral unmixing tasks, where subtractive models prevail, hybrid models have the advantage of outputting more accurate colours in this workflow. |
BibTeX:
@article{2021bJAIC, author = {Grillini, Federico and Thomas, Jean-Baptiste and George, Sony}, editor = {International Colour Association}, title = {VisNIR pigment mapping and re-rendering of an experimental painting}, journal = {Journal of the International Colour Association}, year = {2021}, volume = {26}, pages = {3--10}, note = {eng}, url = {http://jbthomas.org/Journals/2021bJAIC.pdf} } |
Courtier G, Lapray P-J, Thomas J-B and Farup I (2021), "Correlations in Joint Spectral and Polarization Imaging", Sensors. Vol. 21(1) |
Abstract: Recent imaging techniques enable the joint capture of spectral and polarization image data. In order to permit the design of computational imaging techniques and future processing of this information, it is interesting to describe the related image statistics. In particular, in this article, we present observations for different correlations between spectropolarimetric channels. The analysis is performed on several publicly available databases that are unified for joint processing. We perform global investigation and analysis on several specific clusters of materials or reflection types. We observe that polarization channels generally have more inter-channel correlation than the spectral channels. |
BibTeX:
@article{2021aSensors, author = {Courtier, Guillaume and Lapray, Pierre-Jean and Thomas, Jean-Baptiste and Farup, Ivar}, title = {Correlations in Joint Spectral and Polarization Imaging}, journal = {Sensors}, year = {2021}, volume = {21}, number = {1}, url = {http://jbthomas.org/Journals/2021aSensors.pdf}, doi = {10.3390/s21010006} } |
Tian Y, Mirjalili F and Thomas J-B (2021), "Analysing texture features from individual observer simulations", Journal of the International Colour Association. Vol. 26, pp. 22-29. |
Abstract: We investigated the impact of simulated individual observer colour matching functions (CMFs) on computational texture features. We hypothesised that most humans perceive texture in a similar manner; hence, a texture indicator that is the least dependent on the individual physiology of human vision would most likely be a potential fit to serve as a quantification of visually perceived texture. To this end, the following strategy was implemented: hyperspectral image textures were converted into XYZ images for individual observer CMFs, and contrast sensitivity function (CSF) filtering was subsequently applied to the XYZ images for visual simulation. Two types of texture features were extracted from the filtered images. Finally, the differences between the texture features were analysed for observers with disparity in their CMFs. |
BibTeX:
@article{2021aJAIC, author = {Tian, Yuan and Mirjalili, Fereshteh and Thomas, Jean-Baptiste}, editor = {International Colour Association}, title = {Analysing texture features from individual observer simulations}, journal = {Journal of the International Colour Association}, year = {2021}, volume = {26}, pages = {22--29}, note = {eng}, url = {http://jbthomas.org/Journals/2021aJAIC.pdf} } |
Bauer JR, Thomas J-B, Hardeberg JY and Verdaasdonk RM (2019), "An Evaluation Framework for Spectral Filter Array Cameras to Optimize Skin Diagnosis", Sensors. Vol. 19(21) |
Abstract: Comparing and selecting an adequate spectral filter array (SFA) camera is application-specific and usually requires extensive prior measurements. An evaluation framework for SFA cameras is proposed, and three cameras are tested in the context of skin analysis. The proposed framework does not require application-specific measurements; spectral sensitivities, together with the number of bands, are the main focus. An optical model of skin is used to generate a specialized training set to improve spectral reconstruction. The quantitative comparison of the cameras is based on the reconstruction of measured skin spectra, colorimetric accuracy, and oxygenation level estimation differences. Specific spectral sensitivity shapes influence the results directly, and a 9-channel camera performed best regarding the spectral reconstruction metrics. Sensitivities at key wavelengths influence the performance of oxygenation level estimation most strongly. The proposed framework allows the comparison of spectral filter array cameras and can guide their application-specific development. |
BibTeX:
@article{2019Sensors, author = {Bauer, Jacob Renzo and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve and Verdaasdonk, Rudolf M.}, title = {An Evaluation Framework for Spectral Filter Array Cameras to Optimize Skin Diagnosis}, journal = {Sensors}, year = {2019}, volume = {19}, number = {21}, url = {http://jbthomas.org/Journals/2019Sensors.pdf}, doi = {10.3390/s19214805} } |
Khan HA, Thomas J-B, Hardeberg JY and Laligant O (2019), "Multispectral camera as spatio-spectrophotometer under uncontrolled illumination", Opt. Express., Jan, 2019. Vol. 27(2), pp. 1051-1070. OSA. |
Abstract: Multispectral constancy enables an illuminant-invariant representation of multispectral data. This article proposes an experimental investigation of multispectral constancy through the use of a multispectral camera as a spectrophotometer for the reconstruction of surface reflectance. Three images with varying illuminations are captured, and the spectra of material surfaces are reconstructed. The acquired images are transformed into a canonical representation through the use of a diagonal transform and a spectral adaptation transform. Experimental results show that the use of multispectral constancy is beneficial for both filter-wheel and snapshot multispectral cameras. The proposed concept is robust to errors in illuminant estimation and performs well with a linear spectral reconstruction method. This work brings us one step closer to the use of multispectral imaging for computer vision. |
BibTeX:
@article{2019OpEx, author = {Haris Ahmad Khan and Jean-Baptiste Thomas and Jon Yngve Hardeberg and Olivier Laligant}, title = {Multispectral camera as spatio-spectrophotometer under uncontrolled illumination}, journal = {Opt. Express}, publisher = {OSA}, year = {2019}, volume = {27}, number = {2}, pages = {1051--1070}, url = {http://jbthomas.org/Journals/2019OpEx.pdf}, doi = {10.1364/OE.27.001051} } |
Khan HA, Mihoubi S, Mathon B, Thomas J-B and Hardeberg JY (2018), "HyTexiLa: High Resolution Visible and Near Infrared Hyperspectral Texture Images", Sensors. Vol. 18(7) |
Abstract: We present a dataset of close-range hyperspectral images of materials that span the visible and near infrared spectrum: HyTexiLa (Hyperspectral Texture images acquired in Laboratory). The data are intended to provide high spectral and spatial resolution reflectance images of 112 materials to study spatial and spectral textures. In this paper, we discuss the calibration of the data and the method for addressing the distortions during image acquisition. We provide a spectral analysis based on non-negative matrix factorization to quantify the spectral complexity of the samples and extend local binary pattern operators to hyperspectral texture analysis. The results demonstrate that although the spectral complexity of each of the textures is generally low, increasing the number of bands permits better texture classification, with the opponent band local binary pattern feature giving the best performance. |
BibTeX:
@article{2018Sensors, author = {Khan, Haris Ahmad and Mihoubi, Sofiane and Mathon, Benjamin and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve}, title = {HyTexiLa: High Resolution Visible and Near Infrared Hyperspectral Texture Images}, journal = {Sensors}, year = {2018}, volume = {18}, number = {7}, url = {http://jbthomas.org/Journals/2018Sensors.pdf}, doi = {10.3390/s18072045} } |
El Khoury J, Le Moan S, Thomas J-B and Mansouri A (2018), "Color and sharpness assessment of single image dehazing", Multimedia Tools and Applications., June, 2018. Vol. 77, pp. 15409–15430. |
Abstract: Image dehazing is the process of enhancing a color image of a natural scene that contains an undesirable veil of fog for visualization or as a pre-processing step for computer vision systems. In this work, we investigate the performances of eleven state-of-the-art image quality metrics in evaluating dehazed images, and discuss challenges in designing an efficient dehazing evaluation metric. This is done through a composite study based on the agreement between subjective and objective evaluations. Accordingly, we evaluate five state-of-the-art dehazing algorithms. We use two semi-indoor scenes, degraded with several levels of fog. One important aspect of these scenes is that the fog-free images are available and can therefore serve as ground-truth data for dehazing methods evaluation. This study shows that the best working dehazing method depends on the density of fog. There seems to be a clear distinction between what people perceive as good quality in terms of color restoration and in terms of sharpness restoration. Most metrics show limitations in providing proper quality prediction of dehazing. According to the introduction and analysis, a contribution of this work is to point out the flaws in the evaluation and development of dehazing methods. Our observations might be considered when designing efficient methods and metrics dedicated to image dehazing. |
BibTeX:
@article{2018MTAP, author = {El Khoury, Jessica and Le Moan, Steven and Thomas, Jean-Baptiste and Mansouri, Alamin}, title = {Color and sharpness assessment of single image dehazing}, journal = {Multimedia Tools and Applications}, year = {2018}, volume = {77}, pages = {15409–15430}, url = {http://jbthomas.org/Journals/2018MTAP.pdf}, doi = {10.1007/s11042-017-5122-y} } |
Thomas J-B and Farup I (2018), "Demosaicing of Periodic and Random Color Filter Arrays by Linear Anisotropic Diffusion", Journal of Imaging Science and Technology. Vol. 62(5), pp. 50401-1-50401-8. |
Abstract: The authors develop several versions of the diffusion equation to demosaic color filter arrays of any kind. In particular, they compare isotropic versus anisotropic and linear versus non-linear formulations. Using these algorithms, they investigate the effect of mosaics on the resulting demosaiced images. They perform cross analysis on images, mosaics, and algorithms. They find that random mosaics do not perform the best with their algorithms, but rather pseudo-random mosaics give the best results. The Bayer mosaic also shows equivalent results to good pseudo-random mosaics in terms of peak signal-to-noise ratio but causes visual aliasing artifacts. The linear anisotropic diffusion method performs the best of the diffusion versions, at the level of state-of-the-art algorithms. |
BibTeX:
@article{2018cJIST, author = {Thomas, Jean-Baptiste and Farup, Ivar}, title = {Demosaicing of Periodic and Random Color Filter Arrays by Linear Anisotropic Diffusion}, journal = {Journal of Imaging Science and Technology}, year = {2018}, volume = {62}, number = {5}, pages = {50401-1-50401-8}, url = {http://jbthomas.org/Journals/2018cJIST.pdf}, doi = {10.2352/J.ImagingSci.Technol.2018.62.5.050401} } |
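For illustration, the linear isotropic variant that this paper compares against can be sketched as heat-equation inpainting of one mosaic channel, with the measured pixels clamped at each step. This is a minimal generic sketch (step size, iteration count, and function name are illustrative assumptions, not the paper's implementation; the anisotropic version additionally steers the diffusion along image structures):

```python
import numpy as np

def diffusion_demosaic_channel(samples, mask, iters=200):
    """Fill the missing samples of one channel of a (possibly random)
    colour filter array by linear isotropic diffusion: iterate the
    discrete heat equation while clamping the pixels the mosaic
    actually measured."""
    u = np.where(mask, samples, samples[mask].mean())  # init unknowns at mean
    for _ in range(iters):
        # 4-neighbour averages with replicated borders
        up    = np.roll(u, -1, axis=0); up[-1]      = u[-1]
        down  = np.roll(u,  1, axis=0); down[0]     = u[0]
        left  = np.roll(u, -1, axis=1); left[:, -1] = u[:, -1]
        right = np.roll(u,  1, axis=1); right[:, 0] = u[:, 0]
        u = u + 0.2 * (up + down + left + right - 4.0 * u)  # explicit heat step
        u[mask] = samples[mask]  # clamp measured pixels
    return u
```

For a constant image the diffusion leaves the measured values untouched and fills the holes exactly; on real mosaics the number of iterations trades smoothing against reconstruction of high frequencies.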
Khan HA, Thomas J-B, Hardeberg JY and Laligant O (2018), "Spectral Adaptation Transform for Multispectral Constancy", Journal of Imaging Science and Technology. Vol. 62(2), pp. 20504-1-20504-12. |
Abstract: The spectral reflectance of an object surface provides valuable information of its characteristics. Reflectance reconstruction from multispectral image data is typically based on certain assumptions. One of the common assumptions is that the same illumination is used for system calibration and image acquisition. The authors propose the concept of multispectral constancy which transforms the captured sensor data into an illuminant-independent representation, analogously to the concept of computational color constancy. They propose to transform the multispectral image data to a canonical representation through spectral adaptation transform (SAT). The performance of such a transform is tested on measured reflectance spectra and hyperspectral reflectance images. The authors also investigate the robustness of the transform to the inaccuracy of illuminant estimation in natural scenes. Results of reflectance reconstruction show that the proposed SAT is efficient and is robust to error in illuminant estimation. |
BibTeX:
@article{2018bJIST, author = {Khan, Haris Ahmad and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve and Laligant, Olivier}, title = {Spectral Adaptation Transform for Multispectral Constancy}, journal = {Journal of Imaging Science and Technology}, year = {2018}, volume = {62}, number = {2}, pages = {20504-1-20504-12}, url = {http://jbthomas.org/Journals/2018bJIST.pdf}, doi = {10.2352/J.ImagingSci.Technol.2018.62.2.020504} } |
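The diagonal transform underlying this line of work can be illustrated with a von Kries-style per-channel scaling; the sketch below is a generic illustration under assumed per-channel illuminant estimates, not the authors' learned spectral adaptation transform (all names and values are hypothetical):

```python
import numpy as np

def diagonal_adaptation(sensor_data, illum_response, canonical_response):
    """Map per-channel sensor responses captured under an estimated
    illuminant to a canonical representation with a diagonal
    (von Kries-style) transform: each channel is scaled by the ratio
    of the canonical response to the estimated illuminant response."""
    scale = np.asarray(canonical_response, float) / np.asarray(illum_response, float)
    return np.asarray(sensor_data, float) * scale  # broadcasts over pixels

# Illustrative 2-pixel, 3-channel capture under an estimated illuminant.
pixels = np.array([[0.2, 0.4, 0.1],
                   [0.6, 0.3, 0.5]])
illum = np.array([0.5, 1.0, 0.25])     # estimated illuminant response per channel
canonical = np.array([1.0, 1.0, 1.0])  # target canonical illuminant
adapted = diagonal_adaptation(pixels, illum, canonical)
```

The paper's SAT then refines this diagonal mapping so that reflectance reconstruction trained under the canonical illuminant remains accurate.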
El Khoury J, Thomas J-B and Mansouri A (2018), "A Database with Reference for Image Dehazing Evaluation", Journal of Imaging Science and Technology. Vol. 62(1), pp. 10503-1-10503-13. |
Abstract: In this article, the authors introduce a new color image database, CHIC (Color Hazy Images for Comparison), devoted to haze model assessment and dehazing method evaluation. For three real scenes, they provide two illumination conditions and several densities of real fog. The main interest lies in the availability of several metadata parameters such as the distance from the camera to the objects in the scene, the image radiance and the fog density through fog transmittance. For each scene, the fog-free (ground-truth) image is also available, which allows an objective comparison of the resulting image enhancement and potential shortcomings of the model. Five different dehazing methods are benchmarked on three intermediate levels of fog using existing image quality assessment (IQA) metrics with reference to the provided fog-free image. This provides a basis for the evaluation of dehazing methods across fog densities as well as the effectiveness of existing dehazing dedicated IQA metrics. The results indicate that more attention should be given to dehazing methods and the evaluation of metrics to meet an optimal level of image quality. This database and its description are freely available at the web address http://chic.u-bourgogne.fr. |
BibTeX:
@article{2018aJIST, author = {El Khoury, Jessica and Thomas, Jean-Baptiste and Mansouri, Alamin}, title = {A Database with Reference for Image Dehazing Evaluation}, journal = {Journal of Imaging Science and Technology}, year = {2018}, volume = {62}, number = {1}, pages = {10503-1-10503-13}, url = {http://jbthomas.org/Journals/2018aJIST.pdf}, doi = {10.2352/J.ImagingSci.Technol.2018.62.1.010503} } |
Lapray P-J, Thomas J-B and Gouton P (2017), "High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras", Sensors. Vol. 17(6), pp. 1281. |
Abstract: Spectral filter array imaging exhibits a strong similarity with color filter arrays. This permits us to embed this technology in practical vision systems with little adaptation of the existing solutions. In this communication, we define an imaging pipeline that permits high dynamic range (HDR) spectral imaging, which is extended from color filter arrays. We propose an implementation of this pipeline on a prototype sensor and evaluate the quality of our implementation results on real data with objective metrics and visual examples. We demonstrate that we reduce noise and, in particular, solve the problem of noise generated by the lack of energy balance. Data are provided to the community in an image database for further research. |
BibTeX:
@article{2017Sensors, author = {Lapray, Pierre-Jean and Thomas, Jean-Baptiste and Gouton, Pierre}, title = {High Dynamic Range Spectral Imaging Pipeline For Multispectral Filter Array Cameras}, journal = {Sensors}, year = {2017}, volume = {17}, number = {6}, pages = {1281}, url = {http://jbthomas.org/Journals/2017Sensors.pdf}, doi = {10.3390/s17061281} } |
Khan HA, Thomas J-B, Hardeberg JY and Laligant O (2017), "Illuminant estimation in multispectral imaging", J. Opt. Soc. Am. A., Jul, 2017. Vol. 34(7), pp. 1085-1098. OSA. |
Abstract: With the advancement in sensor technology, the use of multispectral imaging is gaining wide popularity for computer vision applications. Multispectral imaging is used to achieve better discrimination between radiance spectra, as compared to color images. However, it is still sensitive to illumination changes. This study evaluates the potential evolution of illuminant estimation models from color to multispectral imaging. We first present the state of the art in computational color constancy and then extend a set of algorithms to use them in multispectral imaging. We investigate the influence of camera spectral sensitivities and the number of channels. Experiments are performed on simulations over hyperspectral data. The outcomes indicate that the extension of computational color constancy algorithms from color to spectral gives promising results and may have the potential to lead towards an efficient and stable representation across illuminants. However, this is highly dependent on spectral sensitivities and noise. We believe that the development of illuminant-invariant multispectral imaging systems will be a key enabler for further use of this technology. |
BibTeX:
@article{2017JOSAA, author = {Haris Ahmad Khan and Jean-Baptiste Thomas and Jon Yngve Hardeberg and Olivier Laligant}, title = {Illuminant estimation in multispectral imaging}, journal = {J. Opt. Soc. Am. A}, publisher = {OSA}, year = {2017}, volume = {34}, number = {7}, pages = {1085--1098}, url = {http://jbthomas.org/Journals/2017JOSAA.pdf}, doi = {10.1364/JOSAA.34.001085} } |
Lapray P-J, Thomas J-B, Gouton P and Ruichek Y (2017), "Energy balance in Spectral Filter Array camera design", Journal of the European Optical Society-Rapid Publications., jan, 2017. Vol. 13(1) Springer Nature. |
BibTeX:
@article{2017JEOS, author = {Pierre-Jean Lapray and Jean-Baptiste Thomas and Pierre Gouton and Yassine Ruichek}, title = {Energy balance in Spectral Filter Array camera design}, journal = {Journal of the European Optical Society-Rapid Publications}, publisher = {Springer Nature}, year = {2017}, volume = {13}, number = {1}, url = {http://jbthomas.org/Journals/2017JEOS.pdf}, doi = {10.1186/s41476-016-0031-7} } |
Amba P, Thomas J-B and Alleysson D (2017), "N-LMMSE Demosaicing for Spectral Filter Arrays", Journal of Imaging Science and Technology. Vol. 61(4), pp. 40407-1-40407-11. |
Abstract: Spectral filter array (SFA) technology requires development on demosaicing. The authors extend the linear minimum mean square error with neighborhood method to the spectral dimension. They demonstrate that the method is fast and general on Raw SFA images that span the visible and near infra-red part of the electromagnetic range. The method is quantitatively evaluated in simulation first, then the authors evaluate it on real data by the use of non-reference image quality metrics applied on each band. Resulting images show a much better reconstruction of text and high frequencies at the expense of a zipping effect, compared to the benchmark binary-tree method. |
BibTeX:
@article{2017bJIST, author = {Amba, Prakhar and Thomas, Jean-Baptiste and Alleysson, David}, title = {N-LMMSE Demosaicing for Spectral Filter Arrays}, journal = {Journal of Imaging Science and Technology}, year = {2017}, volume = {61}, number = {4}, pages = {40407-1-40407-11}, url = {http://jbthomas.org/Journals/2017bJIST.pdf}, doi = {10.2352/J.ImagingSci.Technol.2017.61.4.040407} } |
de Dravo VW, El Khoury J, Thomas J-B, Mansouri A and Hardeberg JY (2017), "An Adaptive Combination of Dark and Bright Channel Priors for Single Image Dehazing", Journal of Imaging Science and Technology. Vol. 2017(25), pp. 226-234. |
Abstract: Dehazing methods based on prior assumptions derived from statistical image properties fail when these properties do not hold. This is most likely to happen when the scene contains large bright areas, such as snow and sky, due to the ambiguity between the airlight and the depth information. This is the case for the popular dehazing method Dark Channel Prior. In order to improve its performance, the authors propose to combine it with the recent multiscale STRESS, which serves to estimate Bright Channel Prior. Visual and quantitative evaluations show that this method outperforms Dark Channel Prior and competes with the most robust dehazing methods, since it separates bright and dark areas and therefore reduces the color cast in very bright regions. © 2017 Society for Imaging Science and Technology. |
BibTeX:
@article{2017aJIST, author = {de Dravo, Vincent Whannou and El Khoury, Jessica and Thomas, Jean-Baptiste and Mansouri, Alamin and Hardeberg, Jon Yngve}, title = {An Adaptive Combination of Dark and Bright Channel Priors for Single Image Dehazing}, journal = {Journal of Imaging Science and Technology}, year = {2017}, volume = {2017}, number = {25}, pages = {226-234}, url = {http://jbthomas.org/Journals/2017aJIST.pdf}, doi = {10.2352/J.ImagingSci.Technol.2017.61.4.040408} } |
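The dark channel this method builds on is a standard construction (He et al.): per pixel, the minimum over colour channels followed by a local minimum filter. A minimal sketch for context, with the paper's bright-channel/STRESS combination omitted and all names illustrative:

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel of an H x W x 3 image: per-pixel minimum over the
    colour channels, then a local minimum filter over a square patch.
    In haze-free regions this value tends toward zero; elevated values
    indicate haze (or large bright areas, the failure case the paper
    addresses)."""
    min_rgb = image.min(axis=2)          # per-pixel channel minimum
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')
    out = np.empty_like(min_rgb)
    for i in range(h):                   # local minimum filter
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

A symmetric "bright channel" replaces the two minima with maxima; the paper's contribution is an adaptive combination of the two priors rather than either channel alone.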
Thomas J-B, Lapray P-J, Gouton P and Clerc C (2016), "Spectral Characterization of a Prototype SFA Camera for Joint Visible and NIR Acquisition", Sensors. Vol. 16(7), pp. 993. |
Abstract: Multispectral acquisition improves machine vision since it permits capturing more information on object surface properties than color imaging. The concept of spectral filter arrays has been developed recently and allows multispectral single-shot acquisition with a compact camera design. Due to filter manufacturing difficulties, there was, until recently, no system available for a large span of the spectrum, i.e., visible and near infra-red acquisition. This article presents the achievement of a prototype camera that captures seven visible and one near infra-red band on the same sensor chip. A calibration is proposed to characterize the sensor, and images are captured. Data are provided as supplementary material for further analysis and simulations. This opens a new range of applications in the security, robotics, automotive and medical fields. |
BibTeX:
@article{2016Sensors, author = {Thomas, Jean-Baptiste and Lapray, Pierre-Jean and Gouton, Pierre and Clerc, Cédric}, title = {Spectral Characterization of a Prototype SFA Camera for Joint Visible and NIR Acquisition}, journal = {Sensors}, year = {2016}, volume = {16}, number = {7}, pages = {993}, url = {http://jbthomas.org/Journals/2016Sensors.pdf}, doi = {10.3390/s16070993} } |
Pedersen M, Suazo D and Thomas J-B (2016), "Seam-Based Edge Blending for Multi-Projection Systems", International Journal of Signal Processing, Image Processing and Pattern Recognition. Vol. 9(4), pp. 11-26. |
Abstract: Perceptual seamlessness of large-scale tiled displays is still a challenge. One way to avoid bezel effects from contiguous displays is to blend superimposed parts of the image over the edges. This work proposes a new approach for edge blending. It is based on intensity edge blending adapted to the seam description of the image content. The main advantage of this method is to reduce visual artifacts thanks to context adaptation and smooth transitions. We evaluate the quality of the method with a perceptual experiment in which it is compared with state-of-the-art methods. The new method shows the most improvement in low-frequency areas compared to the other techniques. This method can be inserted into any multi-projector system that already applies edge blending. |
BibTeX:
@article{2016IJISP, author = {Pedersen, Marius and Suazo, Daniel and Thomas, Jean-Baptiste}, title = {Seam-Based Edge Blending for Multi-Projection Systems}, journal = {International Journal of Signal Processing, Image Processing and Pattern Recognition}, year = {2016}, volume = {9}, number = {4}, pages = {11-26}, url = {http://jbthomas.org/Journals/2016IJSIP.pdf}, doi = {10.14257/ijsip.2016.9.4.02} } |
Colantoni P, Thomas J-B and Trémeau A (2016), "Sampling CIELAB color space with perceptual metrics", International Journal of Imaging and Robotics. Vol. 16(3), pp. xx-xx. |
Abstract: Sampling a perceptually uniform or pseudo-uniform color space is required for applications from image processing to computational imaging. However, one can face two problems while trying to perform a uniform sampling of such a space. First, the usual cubic grid is not perceptually uniform in most cases. Second, perceptual metrics are often not Euclidean. We propose to overcome these problems. We apply our solution to the CIELAB color space to test its efficiency. We propose an algorithm to define a tabulated color space with regard to a non-Euclidean color difference formula, i.e., DE00 in CIELAB. The tabulated data are available at http://data.couleur.org/deltaE/. We then propose to combine this tabulated color space with an approximated 3D close-packed hexagonal regular sampling of CIELAB. Evaluations of the transform and of the regular sampling are performed and compared with literature standards. |
BibTeX:
@article{2016IJIR, author = {Colantoni, Philippe and Thomas, Jean-Baptiste and Trémeau, Alain}, title = {Sampling CIELAB color space with perceptual metrics}, journal = {International Journal of Imaging and Robotics}, year = {2016}, volume = {16}, number = {3}, pages = {xx-xx}, url = {http://jbthomas.org/Journals/2016IJIR.pdf} } |
Zhao P, Pedersen M, Hardeberg JY and Thomas J-B (2015), "Measuring the Relative Image Contrast of Projection Displays", Journal of Imaging Science and Technology. Vol. 59(3), pp. 30404-1-30404-13. |
Abstract: Projection displays, compared to other modern display technologies, have many unique advantages. However, the image quality assessment of projection displays has not been well studied so far. In this paper, we propose an objective approach to measure the relative contrast of projection displays based on pictures taken with a calibrated digital camera in a dark room where the projector is the only light source. A set of carefully selected natural images is modified to generate multiple levels of image contrast. In order to enhance the validity, reliability, and robustness of our research, we performed the experiments in similar viewing conditions at two separate geographical locations with different projection displays. In each location, a group of observers gave perceptual ratings. Further, we adopted state-of-the-art contrast measures to evaluate the relative contrast of the acquired images. The experimental results suggest that the Michelson contrast measure performs the worst, as expected, while other global contrast measures perform relatively better, but they correlate less with the perceptual ratings than local contrast measures. The local contrast measures perform better than global contrast measures for all test images, but all contrast measures fail on test images with low luminance or dominant colors and without textured areas. In addition, the high correlations between the experimental results for the two projection displays indicate that our proposed assessment approach is valid, reliable, and consistent. |
BibTeX:
@article{2015JIST, author = {Zhao, Ping and Pedersen, Marius and Hardeberg, Jon Yngve and Thomas, Jean-Baptiste}, title = {Measuring the Relative Image Contrast of Projection Displays}, journal = {Journal of Imaging Science and Technology}, year = {2015}, volume = {59}, number = {3}, pages = {30404-1-30404-13}, url = {http://jbthomas.org/Journals/2015JIST.pdf}, doi = {10.2352/J.ImagingSci.Technol.2015.59.3.030404} } |
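The global Michelson measure mentioned in the abstract, and a simple local (patch-wise standard deviation) measure of the kind the study found to correlate better with perceptual ratings, can be sketched as follows. Function names and the patch size are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def michelson_contrast(luminance):
    """Global Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
    lum = np.asarray(luminance, dtype=float)
    lmax, lmin = lum.max(), lum.min()
    if lmax + lmin == 0:
        return 0.0
    return (lmax - lmin) / (lmax + lmin)

def local_rms_contrast(luminance, patch=8):
    """Mean of per-patch standard deviations: a basic local contrast
    measure, sensitive to texture rather than only to the extreme
    luminance values that drive the Michelson measure."""
    lum = np.asarray(luminance, dtype=float)
    h, w = lum.shape
    stds = [lum[i:i + patch, j:j + patch].std()
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]
    return float(np.mean(stds))
```

Because the Michelson measure depends only on the two extreme values, a single bright and a single dark pixel saturate it, which is consistent with it performing worst in the study.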
Lapray P-J, Wang X, Thomas J-B and Gouton P (2014), "Multispectral Filter Arrays: Recent Advances and Practical Implementation", Sensors. Vol. 14(11), pp. 21626. |
Abstract: Thanks to technical progress in interference filter design based on different technologies, we can finally successfully implement the concept of multispectral filter array-based sensors. This article provides the relevant state of the art for multispectral imaging systems and presents the characteristics of the elements of our multispectral sensor as a case study. The spectral characteristics are based on two different spatial arrangements that distribute eight different bandpass filters in the visible and near-infrared area of the spectrum. We demonstrate that the system is viable and evaluate its performance through sensor spectral simulation. |
BibTeX:
@article{2014Sensors, author = {Lapray, Pierre-Jean and Wang, Xingbo and Thomas, Jean-Baptiste and Gouton, Pierre}, title = {Multispectral Filter Arrays: Recent Advances and Practical Implementation}, journal = {Sensors}, year = {2014}, volume = {14}, number = {11}, pages = {21626}, url = {http://jbthomas.org/Journals/2014Sensors.pdf}, doi = {10.3390/s141121626} } |
Wang X, Thomas J-B, Hardeberg JY and Gouton P (2014), "Multispectral imaging: narrow or wide band filters?", Journal of the International Colour Association. Vol. 12, pp. 44-51. |
Abstract: In every aspect, the spectral characteristics of filters play an important role in an image acquisition system. For a colorimetric system, it is traditionally believed that narrow-band filters give rise to higher accuracy of colour reproduction, whereas wide-band filters, such as complementary colour filters, have the advantage of higher sensitivity. In the context of multispectral image capture, the objective is very often to retrieve an estimation of the spectral reflectance of the captured objects. The literature does not provide a satisfactory answer as to which configuration yields the best results. It is therefore of interest to verify which type of filter performs best in estimating the reflectance spectra for the purpose of multispectral image acquisition. A series of experiments was conducted on a simulated imaging system, with six types of filters of varying bandwidths paired with three linear reflectance estimation methods. The results show that filter bandwidth exerts a direct influence on the accuracy of reflectance estimation. Extremely narrow-band filters did not perform well in the experiment, and the relation between bandwidth and reflectance estimation accuracy is not monotonic. The results also indicate that the optimal number of filters depends on the spectral similarity metrics employed. |
BibTeX:
@article{2014JAIC, author = {Wang, Xingbo and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve and Gouton, Pierre}, editor = {International Colour Association}, title = {Multispectral imaging: narrow or wide band filters?}, journal = {Journal of the International Colour Association}, year = {2014}, volume = {12}, pages = {44--51}, note = {eng}, url = {http://jbthomas.org/Journals/2014JAIC.pdf} } |
Colantoni P, Thomas J-B and Hardeberg JY (2011), "High-end colorimetric display characterization using an adaptive training set", Journal of the Society for Information Display. Vol. 19(8), pp. 520-530. Blackwell Publishing Ltd. |
Abstract: A new, accurate, and technology-independent display color-characterization model is introduced. It is based on polyharmonic spline interpolation and on an optimized adaptive training data set. The establishment of this model is fully automatic and requires only a few minutes, making it efficient in a practical situation. The experimental results are very good for both the forward and inverse models. Typically, the proposed model yields an average model prediction error of about 1 ΔEab* unit or below for several displays. The maximum error is shown to be low as well. |
BibTeX:
@article{2011JSID, author = {Colantoni, Philippe and Thomas, Jean-Baptiste and Hardeberg, Jon Y.}, title = {High-end colorimetric display characterization using an adaptive training set}, journal = {Journal of the Society for Information Display}, publisher = {Blackwell Publishing Ltd}, year = {2011}, volume = {19}, number = {8}, pages = {520--530}, url = {http://jbthomas.org/Journals/2011JSID.pdf}, doi = {10.1889/JSID19.8.520} } |
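As a rough illustration of the model above, a polyharmonic (thin-plate) spline can be fit from device drive values to measured XYZ. The forward "display" below is a hypothetical gamma-plus-matrix stand-in for real instrument measurements, and SciPy's `RBFInterpolator` is used in place of a custom spline implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Stand-in forward display model (gamma + primary matrix), playing the
# role of the colorimetric measurements used to train the model.
M = np.array([[0.41, 0.36, 0.18],
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])

def measure(rgb):
    """Toy 'measurement': RGB drive values -> XYZ."""
    return (rgb ** 2.2) @ M.T

# Training set: the drive values at which the display is "measured".
rgb_train = rng.uniform(0.0, 1.0, (300, 3))
xyz_train = measure(rgb_train)

# Polyharmonic spline interpolation, the core of the proposed model.
model = RBFInterpolator(rgb_train, xyz_train, kernel='thin_plate_spline')

# Evaluate the forward model on unseen drive values.
rgb_test = rng.uniform(0.1, 0.9, (50, 3))
max_err = np.abs(model(rgb_test) - measure(rgb_test)).max()
```

The paper's contribution is largely in how `rgb_train` is chosen (an optimized adaptive training set), which the uniform sampling here does not reproduce.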
Thomas J-B, Bakke A and Gerhardt J (2010), "Spatial Nonuniformity of Color Features in Projection Displays: A Quantitative Analysis", Journal of Imaging Science and Technology. Vol. 54(3), pp. 30403-1-30403-13. |
Abstract: In this article, the authors investigate and study the color spatial uniformity of projectors. A common assumption in previous works is to consider that only the luminance is varying along the spatial dimensions. The authors show that the chromaticity plays a significant role in the spatial color shift and should not be disregarded depending on the application. The authors base their conclusions on the measurements obtained from three projectors. First, two methods are used to analyze the spatial properties of the projectors, a conventional approach, and a new one that considers three-dimensional gamut differences. The results show that the color gamut difference between two spatial coordinates within the same display can be larger than the difference observed between two projectors. In a second part, the authors focus on the evaluation of assumptions commonly made in projector color characterization. The authors investigate if these assumptions are still valid along the spatial dimensions. Features studied include normalized response curve, chromaticity constancy of primaries, and channel independence. Some features seem to vary noticeably spatially, such as the normalized response curve. Some others appear to be quite invariant, such as the channel independence. |
BibTeX:
@article{2010JIST, author = {Thomas, Jean-Baptiste and Bakke, Arne and Gerhardt, Jeremie}, title = {Spatial Nonuniformity of Color Features in Projection Displays: A Quantitative Analysis}, journal = {Journal of Imaging Science and Technology}, year = {2010}, volume = {54}, number = {3}, pages = {30403-1-30403-13}, url = {http://jbthomas.org/Journals/2010JIST.pdf}, doi = {10.2352/J.ImagingSci.Technol.2010.54.3.030403} } |
Thomas J-B, Colantoni P, Hardeberg JY, Foucherot I and Gouton P (2008), "A geometrical approach for inverting display color-characterization models", Journal of the Society for Information Display. Vol. 16(10), pp. 1021-1031. Blackwell Publishing Ltd. |
Abstract: Some display color-characterization models are not easily inverted. This work proposes ways to build geometrical inverse models given any forward color-characterization model. The main contribution is to propose and analyze several methods to optimize the 3-D geometrical structure of an inverse color-characterization model directly based on the forward model. Both the amount of data and their distribution in color space are especially focused on. Several optimization criteria, related either to an evaluation data set or to the geometrical structure itself, are considered. A practical case with several display devices, combining the different methods proposed in the article, is considered and analyzed. |
BibTeX:
@article{2008JSID, author = {Thomas, Jean-Baptiste and Colantoni, Philippe and Hardeberg, Jon Y. and Foucherot, Irène and Gouton, Pierre}, title = {A geometrical approach for inverting display color-characterization models}, journal = {Journal of the Society for Information Display}, publisher = {Blackwell Publishing Ltd}, year = {2008}, volume = {16}, number = {10}, pages = {1021--1031}, url = {http://jbthomas.org/Journals/2008JSID.pdf}, doi = {10.1889/JSID16.10.1021} } |
Thomas J-B, Hardeberg JY, Foucherot I and Gouton P (2008), "The PLVC display color characterization model revisited", Color Research & Application. Vol. 33(6), pp. 449-460. Wiley Subscription Services, Inc., A Wiley Company. |
Abstract: This work proposes a study of the Piecewise Linear assuming Variation in Chromaticity (PLVC) display color characterization model. This model has not been widely used, as its improved accuracy compared with the more common PLCC (Piecewise Linear assuming Chromaticity Constancy) model is not significant for CRT (Cathode Ray Tube) display technology, and it requires more computing power. With today's computers, computational complexity is less of a problem, and today's display technologies show a different colorimetric behavior than CRTs. The main contribution of this work is to generalize the PLVC model to multiprimary displays and to provide extensive experimental results and analysis for today's display technologies. We confirm and extend the results found in the literature and compare this model with classical PLCC and Gain-Offset-Gamma-Offset models. We show that using this model is highly beneficial for Liquid Crystal Displays, reducing the average error to about a third for the two tested LCD projectors compared with a black-corrected PLCC model, from 3.93 and 1.78 to 1.41 and 0.54 ΔE*ab units, respectively. © 2008 Wiley Periodicals, Inc. Col Res Appl, 33, 449–460, 2008 |
BibTeX:
@article{2008CRA, author = {Thomas, Jean-Baptiste and Hardeberg, Jon Y. and Foucherot, Irène and Gouton, Pierre}, title = {The PLVC display color characterization model revisited}, journal = {Color Research & Application}, publisher = {Wiley Subscription Services, Inc., A Wiley Company}, year = {2008}, volume = {33}, number = {6}, pages = {449--460}, url = {http://jbthomas.org/Journals/2008CRA.pdf}, doi = {10.1002/col.20447} } |
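The PLCC/PLVC distinction above comes down to what is interpolated along each channel's measured ramp: PLVC interpolates the full XYZ triplet, so chromaticity variation along the ramp is retained. A minimal sketch with synthetic ramp measurements (the tone curve, the drift term, and the zero black level are all invented for illustration):

```python
import numpy as np

# Synthetic per-channel ramp measurements: XYZ at 9 drive levels, with a
# small chromaticity drift along each ramp.
levels = np.linspace(0.0, 1.0, 9)

def ramp_xyz(channel):
    Y = levels ** 2.2                  # toy tone curve
    drift = 0.05 * levels              # chromaticity varies with level
    base = np.eye(3)[channel]
    return np.outer(Y, base) + np.outer(Y * drift, 1.0 - base)

ramps = [ramp_xyz(c) for c in range(3)]   # one (9, 3) table per channel
black = np.zeros(3)                       # measured black level (toy: zero)

def plvc(rgb):
    """PLVC: interpolate the full XYZ ramp of each channel, then sum
    the black-corrected contributions (plus one black term)."""
    xyz = black.copy()
    for c, ramp in enumerate(ramps):
        for k in range(3):                # X, Y and Z interpolated separately
            xyz[k] += np.interp(rgb[c], levels, ramp[:, k]) - black[k]
    return xyz
```

A PLCC model would instead scale one fixed chromaticity per channel by an interpolated luminance, which is exactly what discards the drift modeled above.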
Fsian AN, Thomas J-B, Hardeberg JY and Gouton P (2025), "Stitching from Spectral Filter Array Video Sequences", In Computational Color Imaging. Cham , pp. 132-146. Springer Nature Switzerland. |
Abstract: Hyperspectral imaging offers high spectral and spatial resolution, but its high costs and time-consuming nature make it difficult to use. Spectral Filter Array (SFA) imaging presents an alternative, offering high spectral resolution, user-friendliness, and affordability, but at the cost of limited spatial resolution. This paper presents an approach to address this trade-off, starting with raw overlapping frames from spectral videos, followed by a demosaicking network process before tackling the stitching problem. Our experiments on various spectral videos, supported by image quality metrics and qualitative demonstrations, indicate that this approach effectively enhances the spatial resolution of spectral images while reducing artifacts. The integration of the demosaicking and the stitching provides a robust solution for spectral video applications, paving the way for further advancements in panoramic spectral image stitching. |
BibTeX:
@inproceedings{CCIW2024, author = {Fsian, Abdelhamid N. and Thomas, Jean-Baptiste and Hardeberg, Jon Y. and Gouton, Pierre}, editor = {Schettini, Raimondo and Trémeau, Alain and Tominaga, Shoji and Bianco, Simone and Buzzelli, Marco}, title = {Stitching from Spectral Filter Array Video Sequences}, booktitle = {Computational Color Imaging}, publisher = {Springer Nature Switzerland}, year = {2025}, pages = {132--146}, url = {https://jbthomas.org/Conferences/2024CCIW.pdf} } |
Fsian AN, Thomas J-B, Hardeberg JY and Gouton P (2023), "Bayesian Multispectral Videos Super Resolution", In 2023 11th European Workshop on Visual Information Processing (EUVIP)., Sep., 2023. , pp. 1-6. |
Abstract: Due to hardware limitations, multispectral videos often exhibit significantly lower resolution compared to standard color videos. These videos capture images in multiple bands of the electromagnetic spectrum, providing valuable additional information that is not available in traditional RGB images. This paper proposes a Bayesian approach to estimate super-resolved images from low-resolution spectral videos. We consider adjacent frames from a video sequence to produce one super-resolution image at a time. We include the motion between adjacent frames in our proposal and, unlike previous work in the literature, we estimate the blur and noise while reconstructing the higher-resolution image. Experimental results on spectral videos demonstrate the effectiveness of our approach in producing high-quality super-resolved images. |
BibTeX:
@inproceedings{EUVIP2023, author = {Fsian, Abdelhamid N. and Thomas, Jean-Baptiste and Hardeberg, Jon Y. and Gouton, Pierre}, title = {Bayesian Multispectral Videos Super Resolution}, booktitle = {2023 11th European Workshop on Visual Information Processing (EUVIP)}, year = {2023}, pages = {1-6}, url = {https://jbthomas.org/Conferences/2023EUVIP.pdf}, doi = {10.1109/EUVIP58404.2023.10323068} } |
Thomas J-B, Lapray P-J, Derhak M and Farup I (2023), "Standard Representation Space for Spectral Imaging", Color and Imaging Conference. Vol. 31(1), pp. 187-187. |
Abstract: The variety of spectral imaging systems makes the portability of imaging solutions and the generalization of research difficult. We advocate for the creation of a standard representation space for spectral imaging. We propose a space that allows connection to colorimetric standards and to spectral reflectance factors, while keeping a low and practical dimension. The performance of one instance of this standard is evaluated through simulations. Results demonstrate that this space may show reduced accuracy compared with some native camera spaces, especially instances with a number of bands larger than the standardized dimension, but this limitation comes with benefits in size and standardization. |
BibTeX:
@article{CIC2023, author = {Jean-Baptiste Thomas and Pierre-Jean Lapray and Max Derhak and Ivar Farup}, title = {Standard Representation Space for Spectral Imaging}, journal = {Color and Imaging Conference}, year = {2023}, volume = {31}, number = {1}, pages = {187--187}, url = {https://jbthomas.org/Conferences/2023CIC.pdf}, doi = {10.2352/CIC.2023.31.1.35} } |
Grillini F, Thomas J-B and George S (2023), "Full VNIR-SWIR Hyperspectral Imaging Workflow for the Monitoring of Archaeological Textiles", Archiving Conference. Vol. 20(1), pp. 192-192. |
Abstract: A practical workflow to capture and process hyperspectral images in combined VNIR-SWIR ranges is presented and discussed. The pipeline demonstration is intended to increase the visibility of the possibilities that advanced hyperspectral imaging techniques can bring to the study of archaeological textiles. Emphasis is placed on the fusion of data from two hyperspectral devices. Every aspect of the pipeline is analyzed, from the practical and optimal implementation of the imaging setup to the choices and decisions that can be made during the data processing steps. The workflow is demonstrated on an archaeological textile belonging to the Paracas Culture (Peru, 200 BC - 100 AD ca.) and displays an example in which an inappropriate selection of the processing steps can lead to a misinterpretation of the hyperspectral data. |
BibTeX:
@article{ARCHIVING2023, author = {Federico Grillini and Jean-Baptiste Thomas and Sony George}, title = {Full VNIR-SWIR Hyperspectral Imaging Workflow for the Monitoring of Archaeological Textiles}, journal = {Archiving Conference}, year = {2023}, volume = {20}, number = {1}, pages = {192--192}, url = {https://jbthomas.org/Conferences/2023ARCHIVING.pdf}, doi = {10.2352/issn.2168-3204.2023.20.1.39} } |
Dumoulin R, Lapray P-J, Thomas J-B and Farup I (2022), "Impact of training data on LMMSE demosaicing for Colour-Polarization Filter Array", In 2022 16th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)., Oct, 2022. , pp. 275-280. |
Abstract: Linear minimum mean square error can be used to demosaic images from a colour-polarization filter array sensor. However, the role of training data in its performance remains an open question. We study model selection using cross-validation techniques. The results show that the training model converges quickly, and that there is no significant difference in training the model with more than 12 images of approximately 1.5 megapixels. We also found that the selected trained model performs better than a dedicated Colour-Polarization Filter Array demosaicing algorithm in terms of Peak Signal-to-Noise Ratio. |
BibTeX:
@inproceedings{SITIS2022, author = {Dumoulin, Ronan and Lapray, Pierre-Jean and Thomas, Jean-Baptiste and Farup, Ivar}, title = {Impact of training data on LMMSE demosaicing for Colour-Polarization Filter Array}, booktitle = {2022 16th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS)}, year = {2022}, pages = {275-280}, url = {http://jbthomas.org/Conferences/2022SITIS.pdf}, doi = {10.1109/SITIS57111.2022.00031} } |
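The LMMSE demosaicing studied above reduces to learning a linear map from mosaicked patches to full-channel patches via second-order statistics of the training data. A toy sketch (random data stands in for the training images, and the 2x2 sampling pattern is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy training set: X holds full 3-channel 2x2 patches (flattened to 12
# values); Y holds the mosaicked observation (one channel per pixel).
n, p = 5000, 4
X = rng.normal(0.0, 1.0, (n, p * 3))
mask = np.zeros((p, p * 3))               # hypothetical sampling pattern
for i in range(p):
    mask[i, 3 * i + (i % 3)] = 1.0        # pixel i keeps channel i % 3
Y = X @ mask.T + rng.normal(0.0, 0.05, (n, p))   # mosaic + sensor noise

# LMMSE weights from second-order statistics: W = R_xy R_yy^{-1}.
# The size of the training set controls how well Rxy and Ryy are
# estimated, which is the question the paper examines.
Rxy = X.T @ Y / n
Ryy = Y.T @ Y / n
W = Rxy @ np.linalg.inv(Ryy)
X_hat = Y @ W.T                           # linear reconstruction
```

With real image patches the channels are strongly correlated, so the learned `W` also recovers the unsampled entries; the independent random data here only lets it denoise the sampled ones.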
Russo S, Brambilla L, Thomas J-B and Joseph E (2022), "Revealing degradation patterns: Imaging Techniques for the study of metal soap formation on painted metal objects", In Metal 2022, proceedings of the interim meeting of the ICOM-CC metals working group. , pp. 1-5. ICOM-CC; The National Museum of Finland. |
Abstract: In attempting to document the degradation processes occurring on cultural heritage objects, imaging-based analytical techniques present many advantages, as they provide spatial and spectral information and allow the simultaneous investigation of the chemical and morphological characteristics of a sample. This study presents a protocol based on 2D chemical imaging - Fourier transform infrared microspectroscopy (µ-FTIR) and hyperspectral imaging (HSI) - aimed at monitoring the formation of metal soaps on model metal coupons. Oil-painted metal supports are in fact not immune to degradation due to metal soap formation, a phenomenon that affects all oil-painted surfaces from the initial curing of the paint film. Copper and zinc sheets were coated with cold-pressed linseed oil and artificially aged for one month in order to instigate the formation of metal soaps. Their reaction was then monitored by means of µ-FTIR. The chemical maps showed an increasing trend over time, elucidating some aspects and differences in the mechanism of formation of the organic salts for the two metal substrates. Additionally, the samples were analysed using two hyperspectral cameras, operating in the visible-near infrared and short-wave infrared spectral ranges. The appropriateness of the two cameras for the investigation of metal soaps, and the effect of the thickness of the coating on the data obtained, are discussed here. |
BibTeX:
@inproceedings{ICOM2022, author = {Russo, Silvia and Brambilla, Laura and Thomas, Jean-Baptiste and Joseph, Edith}, title = {Revealing degradation patterns: Imaging Techniques for the study of metal soap formation on painted metal objects}, booktitle = {Metal 2022, proceedings of the interim meeting of the ICOM-CC metals working group}, publisher = {ICOM-CC; The National Museum of Finland}, year = {2022}, pages = {1-5}, url = {http://jbthomas.org/Conferences/2022ICOM.pdf} } |
Grillini F, Thomas J-B and George S (2022), "Hyperspectral VNIR-SWIR Image Registration: Do Not Throw Away Those Overlapping Low SNR Bands", In 2022 12th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS). , pp. 1-5. IEEE. |
BibTeX:
@inproceedings{2022WHISPERS, author = {Grillini, Federico and Thomas, Jean-Baptiste and George, Sony}, title = {Hyperspectral VNIR-SWIR Image Registration: Do Not Throw Away Those Overlapping Low SNR Bands}, booktitle = {2022 12th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS)}, publisher = {IEEE}, year = {2022}, pages = {1-5}, url = {http://jbthomas.org/Conferences/2022WHISPERS.pdf}, doi = {10.1109/WHISPERS56178.2022.9955080} } |
Bozorgian A, Pedersen M and Thomas J-B (2022), "The Effect of Peripheral Contrast Sensitivity Functions on the Performance of the Foveated Wavelet Image Quality Index", In Proc. IS&T London Imaging Meeting. , pp. 6 - 10. IS&T. |
Abstract: The Contrast Sensitivity Function (CSF) is an integral part of objective foveated image/video quality assessment metrics. In this paper, we investigate the effect of a new eccentricity-dependent CSF model on the performance of the foveated wavelet image quality index (FWQI). Our results do not show a considerable change in FWQI performance when it is evaluated against the LIVE-FBT-FCVR 2D dataset. We argue that the resolution of the head-mounted display used in the subjective experiment limits our ability to reveal the anticipated effect of the new CSF on FWQI performance. |
BibTeX:
@inproceedings{2022LIM, author = {Bozorgian, Aliakbar and Pedersen, Marius and Thomas, Jean-Baptiste}, title = {The Effect of Peripheral Contrast Sensitivity Functions on the Performance of the Foveated Wavelet Image Quality Index}, booktitle = {Proc. IS&T London Imaging Meeting}, publisher = {IS&T}, year = {2022}, pages = {6 -- 10}, url = {http://jbthomas.org/Conferences/2022LIM.pdf}, doi = {10.2352/lim.2022.1.1.03} } |
Ansari-asl M, Thomas J-B and Hardeberg JY (2022), "Camera response function assessment in multispectral HDR imaging", In Proc. IS&T Int’l. Symp. on Electronic Imaging: Color Imaging: Displaying, Processing, Hardcopy, and Applications. Vol. 34, pp. 141-1 - 141-6. IS&T. |
Abstract: Recently, spatially varying Bidirectional Reflectance Distribution Functions (svBRDF) have been widely used to characterize the appearance of materials whose visual properties vary over the surface. One of the challenges in image-based svBRDF capture systems arises for surfaces with high specularity and sparkles, which require a dynamic range higher than that of cameras. High Dynamic Range Imaging (HDRI) for svBRDF systems with a multispectral camera has not been addressed properly in the literature. In HDRI, the Camera Response Function (CRF) plays a crucial role in the precision of results, especially when measuring metrological data such as spectral svBRDF. In this work, we investigate the effect of CRF assessment on the precision of measurement. We conducted two experiments with a filter-wheel multispectral camera intended for an svBRDF setup: measuring the absolute CRF using the reflective chart method, and estimating the relative CRF by Debevec and Malik's method. Results are evaluated on two levels, radiance map construction and reflectance calculation, by comparison with telespectroradiometer measurements as ground truth. They show that although HDRI with the measured absolute CRF outputs radiance in the same physical units and scale as the ground truth, HDRI with the estimated relative CRF performed better in terms of the precision of reflectance measurement. |
BibTeX:
@inproceedings{2022EI, author = {Ansari-asl, Majid and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve}, title = {Camera response function assessment in multispectral HDR imaging}, booktitle = {Proc. IS&T Int’l. Symp. on Electronic Imaging: Color Imaging: Displaying, Processing, Hardcopy, and Applications}, publisher = {IS&T}, year = {2022}, volume = {34}, pages = {141-1 -- 141-6}, url = {http://jbthomas.org/Conferences/2022EI.pdf}, doi = {10.2352/EI.2022.34.15.COLOR-141} } |
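For context, the Debevec and Malik approach mentioned in the abstract recovers radiance by inverting the CRF per exposure and averaging the estimates with a hat weight. A single-pixel sketch, with an assumed gamma-style CRF standing in for the curve that would normally be estimated from the brackets themselves:

```python
import numpy as np

# Assumed CRF for illustration; in the actual method this curve is
# recovered from the bracketed images.
def crf_inverse(z):
    """Map an 8-bit pixel value back to sensor exposure X = E * t."""
    return (z / 255.0) ** 2.2

exposures = np.array([1 / 60, 1 / 15, 1 / 4])   # bracket times, seconds
radiance_true = 2.7                              # toy scene radiance E

# Simulate one pixel captured in the three brackets (no quantisation).
z = np.clip((radiance_true * exposures) ** (1 / 2.2) * 255.0, 0.0, 255.0)

# Hat weighting: mid-range pixel values are trusted most.
w = np.minimum(z, 255.0 - z)
lnE = np.sum(w * (np.log(crf_inverse(z) + 1e-12) - np.log(exposures))) / np.sum(w)
radiance_est = np.exp(lnE)   # merged radiance estimate
```

With a relative (rather than absolute) CRF, `radiance_est` carries an unknown global scale, which is the distinction the abstract's evaluation turns on.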
Nguyen M, Thomas J-B and Farup I (2022), "Statistical Analysis of Sparkle in Snow Images", Color and Imaging Conference / JIST FIRST. , pp. 050404-1 - 050404-11. |
Abstract: Sparkle from snow is a common phenomenon in nature but not well studied in the literature. We perform a statistical study on digital snow images captured in situ, analyzing sparkle events through two descriptors: the contrast and the density of sparkle spots. The method for measuring sparkle by Ferrero et al. is adapted, tested, and verified for the case of snow. The dataset is divided into three categories representing the type of snow acquired: dense snow, fresh snow, and old snow. Our analysis highlights the link between the sparkle of snow, the nature of snow, and its grain structure. Sparkle could thus be a feature used for snow classification. |
BibTeX:
@article{2022CIC, author = {Nguyen, Matthieu and Thomas, Jean-Baptiste and Farup, Ivar}, title = {Statistical Analysis of Sparkle in Snow Images}, journal = {Color and Imaging Conference / JIST FIRST}, year = {2022}, pages = {050404-1 - 050404-11}, note = {Best student paper award.}, url = {http://jbthomas.org/Journals/2022JIST.pdf}, doi = {10.2352/J.ImagingSci.Technol.2022.66.5.050404} } |
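A much-simplified stand-in for the two sparkle descriptors named in the abstract (the paper adapts Ferrero et al.'s measurement method; the global 6-sigma thresholding rule below is an illustrative assumption, not that method):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy luminance image of a snow surface: a smooth background plus a few
# bright pixels standing in for specular sparkle events.
img = rng.normal(0.5, 0.02, (200, 200))
for r, c in [(20, 30), (75, 140), (160, 60), (110, 110)]:
    img[r, c] = 1.0

# Detect sparkle candidates as strong outliers above the global mean.
sparkle = img > img.mean() + 6.0 * img.std()

# The two descriptors: density and contrast of sparkle spots.
density = sparkle.sum() / sparkle.size            # spots per pixel
contrast = img[sparkle].mean() / img[~sparkle].mean()
```

On real snow images these two numbers vary with grain structure, which is what makes them usable as classification features.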
Zendagui A, Goïc GL, Chatoux H, Thomas J-B, Castro Y, Nurit M and Mansouri A (2021), "Quality assessment of dynamic virtual relighting from RTI data: application to the inspection of engineering surfaces", In Fifteenth International Conference on Quality Control by Artificial Vision. Vol. 11794, pp. 94 - 102. SPIE. |
Abstract: This paper aims to evaluate the visual quality of the dynamic relighting of manufactured surfaces from Reflectance Transformation Imaging (RTI) acquisitions. The first part of the study defines the optimum acquisition parameters of the RTI system: exposure time, gain, and sampling density. The second part is a psychometric experiment using the Design of Experiments approach. The results of this study help us determine the influence of the parameters associated with the acquisition of RTI data, the models associated with relighting, and the dynamic perception of the resulting videos. |
BibTeX:
@inproceedings{2021QCAV, author = {Abir Zendagui and Gaëtan Le Goïc and Hermine Chatoux and Jean-Baptiste Thomas and Yuly Castro and Marvin Nurit and Alamin Mansouri}, editor = {Takashi Komuro and Tsuyoshi Shimizu}, title = {Quality assessment of dynamic virtual relighting from RTI data: application to the inspection of engineering surfaces}, booktitle = {Fifteenth International Conference on Quality Control by Artificial Vision}, publisher = {SPIE}, year = {2021}, volume = {11794}, pages = {94 -- 102}, url = {http://jbthomas.org/Conferences/2021QCAV.pdf}, doi = {10.1117/12.2589178} } |
Russo S, Brambilla L, Thomas J-B and Joseph E (2021), "The formation of metal soaps: model samples for painted metals degradation", In ICOM Metal France 2021. Paris, France, January, 2021. , pp. -. |
Abstract: Painting on metal substrates is a practice widely reported in art history [1][2]. Many renowned artists, such as Peter Paul Rubens (The Judgement of Paris, c. 1606, Academy of Fine Arts, Vienna), Rembrandt (Self-Portrait, 1630, Nationalmuseum, Stockholm) and, more recently, contemporary artists such as Frida Kahlo (Memory, the Heart, 1937, Michel Petitjean Collection, Paris), Alexander Calder (Mobile, 1941, Metropolitan Museum of Art, USA) and Frank Stella (The Science of Laziness, 1984, National Gallery of Art, Washington DC, USA), chose oil paint and metal supports as materials for their artistic expression.
Degradations similar to those of paintings on canvas have been identified on oil-painted metal artworks, notably the formation of metal soaps [3][4]. The present study aims to develop a suitable analytical strategy, based on a multimodal approach, for the detection of these metal soaps in the case of painted metals. The use of model samples to assess the formation of metal soaps is described here. Two metal substrates were chosen according to their capacity to produce metal soaps and on the basis of a review of artistic practices between 1600 and 1900. Copper and zinc coupons were coated with cold-pressed linseed oil and subjected to accelerated ageing in order to induce the degradation under study. The process was monitored by Fourier Transform Infrared (FTIR) spectroscopy. Preliminary results of the accelerated ageing protocol are presented here. This research is carried out at the Haute Ecole Arc Conservation Restauration in Neuchâtel (Switzerland) within the European ITN-CHANGE project (Marie Sklodowska-Curie Innovative Training Networks Programme no. 813789, www.change-itn.eu). [1] Albini, M., Ingo, G. M., Riccucci, C., Staccioli, M. P., Giuliani, C., Di Carlo, G., Messina, E., Pascucci, M. (2019). The INTERFACE Project: conservation of painted metal artefacts. In: Metal2019 Proceedings of the Interim Meeting of the ICOM-CC Metals Working Group, 2-6 September 2019. Neuchâtel (Switzerland), 470. [2] Gordon, J., Normand, L., Genachte-Le Bail, A., Loeper-Attia, M.-A., Catillon, R., Carré A.-L., Saheb, M., Geffroy, A.-M., Paris, C., Bellot-Gurlet, L., and Reguer, S. (2019). New Strategies for the conservation of paintings on metal. In: Metal2019 Proceedings of the Interim Meeting of the ICOM-CC Metals Working Group, 2-6 September 2019. Neuchâtel (Switzerland), 369. [3] VV.AA. (2017). Paintings on copper and other metal plates. Production, degradation and conservation issues. Fauster López Laura, Chuliá Blanco, I., Sarrió Martín, M.F., Vázquez de Ágredos Pascual, M.L., Carlyle, L., Wadum, J. (Eds). In: Proceedings of the symposium "La Pintura Sobre Cobre (y Otras Planchas Metálicas), Producción, Degradación y Conservación", January 27-28, 2017, Universitat Politècnica de València, Valencia (Spain). [4] VV.AA. (2019). Metal soaps in art. Conservation and Research. Casadio, F., Keune, K., Noble, P., van Loon, A., Hendriks, E., Centeno, S.A., Osmond, G. (Eds.). Springer. |
BibTeX:
@inproceedings{2021ICOM, author = {Russo, Silvia and Brambilla, Laura and Thomas, Jean-Baptiste and Joseph, Edith}, title = {The formation of metal soaps: model samples for painted metals degradation}, booktitle = {ICOM Metal France 2021}, year = {2021}, pages = {-}, url = {http://jbthomas.org/Conferences/2021ICOM.pdf} } |
Nguyen M, Thomas J-B and Farup I (2021), "Investigating the Kokhanovsky snow reflectance model in close-range spectral imaging", Color and Imaging Conference. Vol. 2021(29), pp. 31-36. |
Abstract: The internal structure of snow and its reflectance function contribute substantially to its appearance. We investigate the snow reflectance model introduced by Kokhanovsky and Zege at a close-range imaging scale. By monitoring the evolution of melting snow through time using hyperspectral cameras in a laboratory, we estimate snow grain sizes from 0.24 to 8.49 mm depending on the grain shape assumption chosen. Using our experimental results, we observe differences in shape and magnitude between the reconstructed reflectance spectra and the model. Those variations may be due to our data or to the grain shape assumption of the model. We introduce an effective parameter describing both the snow grain size and the snow grain shape, giving us the opportunity to select the appropriate assumption. The computational technique is ready, but more ground truths are required to validate the model. |
BibTeX:
@article{2021CICe, author = {Nguyen, Mathieu and Thomas, Jean-Baptiste and Farup, Ivar}, title = {Investigating the Kokhanovsky snow reflectance model in close-range spectral imaging}, journal = {Color and Imaging Conference}, year = {2021}, volume = {2021}, number = {29}, pages = {31-36}, url = {http://jbthomas.org/Conferences/2021eCIC.pdf}, doi = {10.2352/issn.2169-2629.2021.29.31} } |
Gigilashvili D, Urban P, Thomas J-B, Pedersen M and Yngve Hardeberg J (2021), "Perceptual Navigation in Absorption-Scattering Space", Color and Imaging Conference. Vol. 2021(29), pp. 328-333. |
Abstract: Translucency optically results from subsurface light transport and plays a considerable role in how objects and materials appear. Absorption and scattering coefficients parametrize the distance a photon travels inside the medium before it gets absorbed or scattered, respectively. Stimuli produced by a material for a distinct viewing condition are perceptually non-uniform w.r.t. these coefficients. In this work, we use multi-grid optimization to embed a non-perceptual absorption-scattering space into a perceptually more uniform space for translucency and lightness. In this process, we rely on A (alpha) as a perceptual translucency metric. Small Euclidean distances in the new space are roughly proportional to lightness and apparent translucency differences measured with A. This makes picking A more practical and predictable, and is a first step toward a perceptual translucency space. |
BibTeX:
@article{2021CICd, author = {Gigilashvili, Davit and Urban, Philipp and Thomas, Jean-Baptiste and Pedersen, Marius and Yngve Hardeberg, Jon}, title = {Perceptual Navigation in Absorption-Scattering Space}, journal = {Color and Imaging Conference}, year = {2021}, volume = {2021}, number = {29}, pages = {328-333}, url = {http://jbthomas.org/Conferences/2021dCIC.pdf}, doi = {10.2352/issn.2169-2629.2021.29.328} } |
Grillini F, Thomas J-B and George S (2021), "Radiometric spectral fusion of VNIR and SWIR hyperspectral cameras", Color and Imaging Conference. Vol. 2021(29), pp. 276-281. |
Abstract: When two hyperspectral cameras are sensitive to complementary portions of the electromagnetic spectrum, it is fundamental that the calibration processes conducted independently lead to comparable radiance values, especially if the cameras share a spectral interval. However, in practice, a perfect match is hard to obtain, and radiance values that are expected to be similar might differ significantly. In the present study, we propose to introduce an additional linear correction factor in the radiometric calibration pipeline of two hyperspectral cameras operating in the visible near infrared (VNIR) and short wave infrared (SWIR) intervals. The linearity properties of both cameras are preliminarily assessed by conducting acquisitions on five standardized targets, highlighting noise at the sensor level and different illumination fields as the main causes of radiance mismatch. The correction step that we propose allows the retrieval of accurate and smoothly connected VNIR-SWIR reflectance factor curves. |
BibTeX:
@article{2021CICc, author = {Grillini, Federico and Thomas, Jean-Baptiste and George, Sony}, title = {Radiometric spectral fusion of VNIR and SWIR hyperspectral cameras}, journal = {Color and Imaging Conference}, year = {2021}, volume = {2021}, number = {29}, pages = {276-281}, url = {http://jbthomas.org/Conferences/2021cCIC.pdf}, doi = {10.2352/issn.2169-2629.2021.29.276} } |
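The linear correction fitted on the shared spectral interval can be illustrated as an ordinary least-squares gain and offset between the two cameras' readings in the overlap region. All numbers below are invented, and the SWIR camera is arbitrarily taken as the reference:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical shared interval: both cameras sample 950-1000 nm.
overlap_nm = np.linspace(950, 1000, 11)
radiance = 0.8 + 0.1 * np.sin(overlap_nm / 30.0)   # "true" scene radiance

# Simulated readings: VNIR is off by a gain and an offset (standing in
# for the illumination-field and sensor-noise mismatch), plus noise.
swir = radiance + rng.normal(0.0, 0.002, radiance.shape)
vnir = 1.15 * radiance + 0.03 + rng.normal(0.0, 0.002, radiance.shape)

# Fit the linear correction vnir_corrected = a * vnir + b on the overlap.
A = np.vstack([vnir, np.ones_like(vnir)]).T
(a, b), *_ = np.linalg.lstsq(A, swir, rcond=None)
vnir_corrected = a * vnir + b   # now connects smoothly to the SWIR data
```

Applying the fitted `a` and `b` across the whole VNIR range is what yields the smoothly connected joint reflectance curve the abstract describes.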
Spote A, Lapray P-J, Thomas J-B and Farup I (2021), "Joint demosaicing of colour and polarisation from filter arrays", Color and Imaging Conference. Vol. 2021(29), pp. 288-293. |
Abstract: This article considers the joint demosaicing of colour and polarisation image content captured with a Colour and Polarisation Filter Array (CPFA) imaging system. The Linear Minimum Mean Square Error (LMMSE) algorithm is applied to this case, and its performance is compared to the state-of-the-art Edge-Aware Residual Interpolation algorithm. Results show that the LMMSE demosaicing method gives statistically higher scores on the largest tested database, in terms of peak signal-to-noise ratio, relative to a CPFA-dedicated algorithm. |
BibTeX:
@article{2021CICb, author = {Spote, Alexandra and Lapray, Pierre-Jean and Thomas, Jean-Baptiste and Farup, Ivar}, title = {Joint demosaicing of colour and polarisation from filter arrays}, journal = {Color and Imaging Conference}, year = {2021}, volume = {2021}, number = {29}, pages = {288-293}, url = {http://jbthomas.org/Conferences/2021bCIC.pdf}, doi = {10.2352/issn.2169-2629.2021.29.288} } |
Kitanovski V, Thomas J-B and Yngve Hardeberg J (2021), "Reflectance estimation from snapshot multispectral images captured under unknown illumination", Color and Imaging Conference. Vol. 2021(29), pp. 264-269. |
Abstract: Multispectral images contain more spectral information of the scene objects compared to color images. The captured information of the scene reflectance is affected by several capture conditions, of which the scene illuminant is dominant. In this work, we implemented an imaging pipeline for a spectral filter array camera, where the focus is the estimation of the scene reflectances when the scene illuminant is unknown. We simulate three scenarios for reflectance estimation from multispectral images, and we evaluate the estimation accuracy on real captured data. We evaluate two camera model-based reflectance estimation methods that use a Wiener filter, and two other linear regression models for reflectance estimation that do not require an image formation model of the camera. Regarding the model-based approaches, we propose to use an estimate for the illuminant's spectral power distribution. The results show that our proposed approach stabilizes and marginally improves the estimation accuracy over the method that estimates the illuminant in the sensor space only. The results also provide a comparison of reflectance estimation using common approaches that are suited for different realistic scenarios. |
BibTeX:
@article{2021CICa, author = {Kitanovski, Vlado and Thomas, Jean-Baptiste and Yngve Hardeberg, Jon}, title = {Reflectance estimation from snapshot multispectral images captured under unknown illumination}, journal = {Color and Imaging Conference}, year = {2021}, volume = {2021}, number = {29}, pages = {264-269}, url = {http://jbthomas.org/Conferences/2021aCIC.pdf}, doi = {10.2352/issn.2169-2629.2021.29.264} } |
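The Wiener-filter estimator this abstract refers to has a standard closed form: with image formation c = A r + n, the estimate is r̂ = C_r Aᵀ (A C_r Aᵀ + C_n)⁻¹ c. A minimal numerical sketch; all matrices here are random stand-ins, not the paper's camera model or priors:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_channels = 31, 9             # spectral samples, camera channels
S = rng.random((n_channels, n_bands))   # stand-in sensor sensitivities
E = np.diag(rng.random(n_bands) + 0.5)  # stand-in illuminant SPD on the diagonal
A = S @ E                               # image formation: c = A @ r + noise

Cr = np.eye(n_bands)                    # prior reflectance covariance (toy: identity)
Cn = 1e-6 * np.eye(n_channels)          # sensor noise covariance (toy: small)

# Wiener estimator: r_hat = Cr A^T (A Cr A^T + Cn)^-1 c
W = Cr @ A.T @ np.linalg.inv(A @ Cr @ A.T + Cn)

r_true = rng.random(n_bands)
c = A @ r_true                          # noiseless camera response
r_hat = W @ c                           # estimated reflectance spectrum
```

When the illuminant is unknown, E must itself be replaced by an estimate, which is the step the paper proposes to stabilise.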
Gigilashvili D, Thomas J-B, Hardeberg JY and Pedersen M (2020), "On the Nature of Perceptual Translucency", In Workshop on Material Appearance Modeling. The Eurographics Association. |
Abstract: Translucency is an appearance attribute used to characterize materials with some degree of subsurface light transport. Although translucency as radiative transfer inside a medium is relatively well understood, translucency as a perceptual attribute leaves much room for interpretation. Our understanding of the translucency perception mechanisms of the human visual system remains limited. No agreement exists on how to quantify perceived translucency, how to compare the translucency of multiple objects and materials, how translucency relates to transparency and opacity, and what its perceptual dimensions are. We highlight the challenges in perception research arising from these ambiguities and argue for the need for standardization. Talk at: https://www.youtube.com/watch?v=0ppdTwPj5EI&t=914s |
BibTeX:
@inproceedings{2020MAM, author = {Gigilashvili, Davit and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve and Pedersen, Marius}, editor = {Klein, Reinhard and Rushmeier, Holly}, title = {On the Nature of Perceptual Translucency}, booktitle = {Workshop on Material Appearance Modeling}, publisher = {The Eurographics Association}, year = {2020}, url = {http://jbthomas.org/Conferences/2020MAM.pdf}, doi = {10.2312/mam.20201141} } |
El Khoury J, Thomas J-B and Mansouri A (2020), "A Spectral Hazy Image Database", In Image and Signal Processing. Cham , pp. 44-53. Springer International Publishing. |
Abstract: We introduce a new database to promote visibility enhancement techniques intended for spectral image dehazing. SHIA (Spectral Hazy Image database for Assessment) is composed of two real indoor scenes, M1 and M2, each with 10 levels of fog and the corresponding fog-free (ground-truth) images, taken in the visible and near infrared ranges every 10 nm from 450 to 1000 nm. SHIA comprises 1540 images of size 1312 × 1082 pixels. All images are captured under the same illumination conditions. Three well-known image dehazing methods based on different approaches were adjusted and applied to the spectral foggy images. This study confirms once again a strong dependency between dehazing methods and fog densities. It urges the design of spectral-based image dehazing methods able to handle simultaneously the accurate estimation of the parameters of the visibility degradation model and the limitation of artifacts and post-dehazing noise. The database can be downloaded freely at http://chic.u-bourgogne.fr. |
BibTeX:
@inproceedings{2020ICISP, author = {El Khoury, Jessica and Thomas, Jean-Baptiste and Mansouri, Alamin}, editor = {El Moataz, Abderrahim and Mammass, Driss and Mansouri, Alamin and Nouboud, Fathallah}, title = {A Spectral Hazy Image Database}, booktitle = {Image and Signal Processing}, publisher = {Springer International Publishing}, year = {2020}, pages = {44--53}, url = {http://jbthomas.org/Conferences/2020ICISP.pdf} } |
Thomas J-B and Hardeberg JY (2020), "How to Look at Spectral Images? A Tentative Use of Metameric Black for Spectral Image Visualisation", In Colour and Visual Computing Symposium 2020. Aachen (2688), pp. 1-11. |
Abstract: The number of bands of a spectral image makes its visualisation as a traditional colour image a challenge. Several directions are investigated in the literature. The state-of-the-art solutions are all limited, either due to the reduced quantity of information displayed, or to such a severe reduction in naturalness or image quality that it is hard to analyse visually. This article surveys the different attempts and investigates a direction that uses a pair of images rather than a single image. We use the principle of metameric black to provide a dual image for visualisation. One image is then a colorimetric image that encompasses the fundamental metamer information, the other one is based on the metameric black and contains extra information related to the spectral nature of the signal. We show that in the case of metameric samples, this visualisation is useful to provide additional information. |
BibTeX:
@inproceedings{2020CVCSb, author = {Thomas, Jean-Baptiste and Hardeberg, Jon Yngve}, editor = {Thomas, Jean-Baptiste and Guarnera, Giuseppe Claudio and George, Sony and Nussbaum, Peter and Amirshahi, Seyed Ali and Kitanovski, Vlado}, title = {How to Look at Spectral Images? A Tentative Use of Metameric Black for Spectral Image Visualisation}, booktitle = {Colour and Visual Computing Symposium 2020}, year = {2020}, number = {2688}, pages = {1--11}, url = {http://ceur-ws.org/Vol-2688} } |
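The metameric-black decomposition used in this paper splits a spectrum r into a fundamental metamer (the component the observer actually sees) and a metameric black (an invisible residual that maps to zero tristimulus). A small sketch; the random matrix T is a stand-in for real colour matching functions:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.random((3, 31))   # stand-in colour matching functions (3 x n_bands)
r = rng.random(31)        # a reflectance spectrum

# Fundamental metamer: projection of r onto the row space of T,
# i.e. the part of the spectrum visible to the observer.
r_fund = np.linalg.pinv(T) @ (T @ r)

# Metameric black: the residual, invisible to the observer.
r_black = r - r_fund
```

The pair (colorimetric image from r_fund, extra image from r_black) is exactly the dual visualisation the paper investigates.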
Grillini F, Thomas J-B and George S (2020), "Linear, Subtractive and Logarithmic Optical Mixing Models in Oil Painting", In Colour and Visual Computing Symposium 2020. Aachen (2688), pp. 1-16. |
Abstract: Identifying the pigments and their abundances in the mixtures of one artist’s masterpiece is of fundamental importance for the preservation of the artifact. The reflectance spectrum of mixtures of pigments can be described by modeling the spectral signature of each component, following different rules and physical laws. We analyze and invert nine different mixing models, in order to perform Spectral Unmixing, using as targets two sets of mock-ups. Based on the results of the spectral reconstruction errors, we are able to point out that three models are best suited to describe the phenomenon: subtractive model, its derivation with extra parameters, and the linear model adapted with extra parameters. |
BibTeX:
@inproceedings{2020CVCSa, author = {Grillini, Federico and Thomas, Jean-Baptiste and George, Sony}, editor = {Thomas, Jean-Baptiste and Guarnera, Giuseppe Claudio and George, Sony and Nussbaum, Peter and Amirshahi, Seyed Ali and Kitanovski, Vlado}, title = {Linear, Subtractive and Logarithmic Optical Mixing Models in Oil Painting}, booktitle = {Colour and Visual Computing Symposium 2020}, year = {2020}, number = {2688}, pages = {1--16}, url = {http://ceur-ws.org/Vol-2688} } |
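One common formulation of a subtractive mixing model is a concentration-weighted geometric mean of the component reflectances. A sketch under that assumption (the exact formula and the toy spectra are illustrative; the paper analyzes and inverts nine model variants):

```python
import numpy as np

def subtractive_mix(R, c):
    """Concentration-weighted geometric-mean ('subtractive') mixing:
    R_mix = prod_i R_i ** c_i, with concentrations normalised to sum to 1.
    One common formulation, used here only as an illustration."""
    c = np.asarray(c, dtype=float)
    c = c / c.sum()
    return np.prod(R ** c[:, None], axis=0)

R = np.array([[0.9, 0.4, 0.1],    # pigment 1 reflectance at 3 bands
              [0.2, 0.6, 0.8]])   # pigment 2 reflectance at 3 bands
mix = subtractive_mix(R, [0.5, 0.5])
```

Spectral unmixing then inverts this forward model: given a measured mixture spectrum, find the concentrations c minimising the reconstruction error.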
Tian Y, Thomas J-B and Mirjalili F (2020), "The Impact of Individual Observer Color Matching Functions on Simulated Texture Features", In Proceedings of the International Colour Association (AIC) Conference 2020. Avignon, France, November, 2020, pp. 407-412. |
Abstract: We investigated the impact of simulating individual observer color matching functions (CMF) on texture features. Our hypothesis is that most people perceive texture in a similar manner, so a texture indicator that is least dependent on the individual physiology of human vision would most likely be a potential fit to visually perceived texture. To this end, the following strategy was implemented: hyperspectral images were converted into XYZ images for individual observer CMFs, estimated by an individual observer colorimetric model. Contrast sensitivity function (CSF) filtering was applied to the XYZ images for visual simulation. Two types of texture features were extracted from the filtered images. Finally, the difference between the texture features computed for two observer groups with different variance in CMFs was analyzed. The results obtained for these two simulated texture features support our hypothesis; however, this is a preliminary investigation that requires further testing and analysis to develop stronger observations. Best student paper award |
BibTeX:
@inproceedings{2020AICc, author = {Tian, Yuan and Thomas, Jean-Baptiste and Mirjalili, Fereshteh}, title = {The Impact of Individual Observer Color Matching Functions on Simulated Texture Features}, booktitle = {Proceedings of the International Colour Association (AIC) Conference 2020}, year = {2020}, pages = {407-412}, note = {Best student paper award (3rd).}, url = {http://jbthomas.org/Conferences/2020cAIC.pdf} } |
Min-Ho J, Thomas J-B, Pedersen M, Cheung V and Rhodes P (2020), "Effect-coating glint according to binocular and monocular vision", In Proceedings of the International Colour Association (AIC) Conference 2020. Avignon, France, November, 2020, pp. 271-275. |
Abstract: This study investigates the impact of two kinds of viewing conditions on glint perception for physical samples. The first aim is to verify how perceived glint is influenced by the observers' two visual modes: binocular and monocular vision. The second is to identify the difference in glint perception between two kinds of surface finishing: rough and smooth. A psychophysical experiment was conducted using 11 glint samples. They were assessed under four conditions, the combinations of the two visual modes and the two physical conditions. The experimental data were statistically analysed by interpreting box plots and verifying the results with a sign test. Perceived glint was rated higher for samples with larger glint flakes on a smooth surface viewed binocularly than on a rough surface viewed monocularly. |
BibTeX:
@inproceedings{2020AICb, author = {Min-Ho, Jung and Thomas, Jean-Baptiste and Pedersen, Marius and Cheung, Vien and Rhodes, Peter}, title = {Effect-coating glint according to binocular and monocular vision}, booktitle = {Proceedings of the International Colour Association (AIC) Conference 2020}, year = {2020}, pages = {271-275}, url = {http://jbthomas.org/Conferences/2020bAIC.pdf} } |
Grillini F, Thomas J-B and George S (2020), "Mixing models in close-range spectral imaging for pigment mapping in Cultural Heritage", In Proceedings of the International Colour Association (AIC) Conference 2020. Avignon, France, November, 2020, pp. 338-342. |
Abstract: Pigment mapping is a fundamental tool in the conservation of cultural heritage paintings. It allows the identification of the pigments, the estimation of their relative concentrations, and their monitoring. In this work, we propose and analyze the spectral unmixing performance of seven optical mixing models, in order to understand which one is best suited to a possible real-case application. Using a pigment palette inspired by the Renaissance period, we realize a set of mock-ups to test the models. The best results are obtained with models of a subtractive nature. The purely subtractive model is then tested on a case-study painting made with the same set of pigments, in order to produce concentration maps for each of the primaries. Best student paper award |
BibTeX:
@inproceedings{2020AICa, author = {Grillini, Federico and Thomas, Jean-Baptiste and George, Sony}, title = {Mixing models in close-range spectral imaging for pigment mapping in Cultural Heritage}, booktitle = {Proceedings of the International Colour Association (AIC) Conference 2020}, year = {2020}, pages = {338-342}, note = {Best student paper award (1st).}, url = {http://jbthomas.org/Conferences/2020aAIC.pdf} } |
Gigilashvili D, Thomas J-B, Pedersen M and Hardeberg JY (2019), "Material Appearance: Ordering and Clustering", Electronic Imaging. Vol. 2019(6), pp. 202-1-202-7. |
Abstract: Appearance is a complex psychovisual phenomenon impacted by various objective and subjective factors that are not yet fully understood. In this work we use real objects and unconstrained conditions to study appearance perception in human subjects, allowing free interaction between objects and observers. Human observers were asked to describe resin objects from an artwork collection and to complete two visual tasks of appearance-based clustering and ordering. The process was filmed for subsequent analysis with the consent of the observers. While the clustering task helps us identify the attributes people use to assess appearance similarity and difference, the ordering task is used to identify potential cues for creating an appearance ordering system. Finally, we generate research hypotheses about how people perceive appearance and outline future studies to validate them. Preliminary observations revealed interesting cross-individual consistency in appearance assessment, while the personal background of an observer might affect deviation from the general appearance assessment trends. On the other hand, no appearance ordering system stood out from the rest, which might be explained by the sparse sampling of our dataset. |
BibTeX:
@article{2019EI, author = {Gigilashvili, Davit and Thomas, Jean-Baptiste and Pedersen, Marius and Hardeberg, Jon Yngve}, title = {Material Appearance: Ordering and Clustering}, journal = {Electronic Imaging}, year = {2019}, volume = {2019}, number = {6}, pages = {202-1-202-7}, url = {http://jbthomas.org/Conferences/2019EI.pdf} } |
Sole A, Gigilashvili D, Midtfjord H, Guarnera D, Guarnera GC, Thomas J-B and Hardeberg JY (2019), "On the Acquisition and Reproduction of Material Appearance", In Computational Color Imaging. Cham , pp. 26-38. Springer International Publishing. |
Abstract: Currently, new technologies (e.g. 2.5D and 3D printing processes) progress at a fast pace in their capacity to (re)produce an ever-broader range of visual aspects. At the same time, a huge research effort is needed to achieve a comprehensive scientific model for the visual sensations we experience in front of an object in its surrounding. Thanks to the projects MUVApp: Measuring and Understanding Visual Appearance funded by the Research Council of Norway, and ApPEARS: Appearance Printing---European Advanced Research School recently granted by the European Union, significant progress is being made on various topics related with acquisition and reproduction of material appearance, and also on the very understanding of appearance. This paper presents recent, ongoing, and planned research in this exciting field, with a specific emphasis on the MUVApp project. |
BibTeX:
@inproceedings{2019CCIW, author = {Sole, Aditya and Gigilashvili, Davit and Midtfjord, Helene and Guarnera, Dar'ya and Guarnera, Giuseppe Claudio and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve}, editor = {Tominaga, Shoji and Schettini, Raimondo and Trémeau, Alain and Horiuchi, Takahiko}, title = {On the Acquisition and Reproduction of Material Appearance}, booktitle = {Computational Color Imaging}, publisher = {Springer International Publishing}, year = {2019}, pages = {26--38}, url = {http://jbthomas.org/Conferences/2019CCIW.pdf} } |
Thomas J, Le Goic G, Castro Y, Nurit M, Mansouri A, Pedersen M and Zendagui A (2019), "Quality Assessment of Reconstruction and Relighting from RTI Images: Application to Manufactured Surfaces", In 2019 15th International Conference on Signal-Image Technology Internet-Based Systems (SITIS)., Nov, 2019. , pp. 746-753. |
Abstract: In this paper, we propose to evaluate the quality of the reconstruction and relighting from images acquired by a Reflectance Transformation Imaging (RTI) device. Three relighting models, namely PTM, HSH and DMD, are evaluated using PSNR and SSIM. A visual assessment of how the reconstructed surfaces are perceived is also carried out through a sensory experiment. This study allows us to estimate the relevance of these models for reproducing the appearance of the manufactured surfaces. It also shows that DMD produces the reconstruction/relighting most faithful to an acquired measurement, and that a higher sampling density does not necessarily mean a higher perceptual quality. |
BibTeX:
@inproceedings{2019bSITISWAI, author = {J. Thomas and G. Le Goic and Y. Castro and M. Nurit and A. Mansouri and M. Pedersen and A. Zendagui}, title = {Quality Assessment of Reconstruction and Relighting from RTI Images: Application to Manufactured Surfaces}, booktitle = {2019 15th International Conference on Signal-Image Technology Internet-Based Systems (SITIS)}, year = {2019}, pages = {746-753}, url = {http://jbthomas.org/Conferences/2019bSITISWAI.pdf}, doi = {10.1109/SITIS.2019.00121} } |
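The PTM model evaluated here fits, per pixel, a biquadratic polynomial in the projected light direction (lu, lv), which can then be evaluated to relight the surface from any direction. A minimal least-squares sketch with synthetic light directions and coefficients (toy values, not RTI data):

```python
import numpy as np

rng = np.random.default_rng(3)
# projected light directions (lu, lv) for 20 RTI acquisitions
L = rng.uniform(-0.7, 0.7, size=(20, 2))
lu, lv = L[:, 0], L[:, 1]

# PTM basis: I = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
M = np.stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)

a_true = np.array([0.1, -0.2, 0.05, 0.4, 0.3, 0.6])  # toy coefficients
I = M @ a_true                                        # observed pixel intensities
a_fit, *_ = np.linalg.lstsq(M, I, rcond=None)         # per-pixel PTM fit

def relight(a, lu, lv):
    """Relight one pixel from fitted PTM coefficients."""
    return a @ np.array([lu**2, lv**2, lu * lv, lu, lv, 1.0])
```

Comparing such relit intensities against held-out acquisitions (via PSNR/SSIM) is the kind of reconstruction quality assessment the paper performs across PTM, HSH and DMD.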
Gigilashvili D, Urban P, Thomas J-B, Hardeberg JY and Pedersen M (2019), "Impact of Shape on Apparent Translucency Differences", Color and Imaging Conference. Vol. 2019(1), pp. 132-137. |
Abstract: Translucency is one of the major appearance attributes. Apparent translucency is impacted by various factors, including object shape and geometry. Despite general proposals that object shape and geometry have a significant effect on apparent translucency, no quantification has been made so far. Quantifying and modeling the impact of geometry, as well as comprehensively understanding the translucency perception process, are of not only academic but also industrial interest, with 3D printing as one example among many. We hypothesize that the presence of thin areas in an object facilitates material translucency estimation, and that changes in material properties have a larger impact on the apparent translucency of objects with thin areas. Computer-generated images of objects with various geometries and thicknesses have been used for a psychophysical experiment in order to quantify the apparent translucency difference between objects while varying material absorption and scattering properties. Finally, the absorption and scattering difference thresholds at which the human visual system starts perceiving a translucency difference need to be identified, and their consistency needs to be analyzed across different shapes and geometries. |
BibTeX:
@article{2019bCIC, author = {Gigilashvili, Davit and Urban, Philipp and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve and Pedersen, Marius}, title = {Impact of Shape on Apparent Translucency Differences}, journal = {Color and Imaging Conference}, year = {2019}, volume = {2019}, number = {1}, pages = {132-137}, url = {http://jbthomas.org/Conferences/2019bCIC.pdf}, doi = {10.2352/issn.2169-2629.2019.27.25} } |
Colantoni P, Thomas J, Hebert M and Trémeau A (2019), "An Online Tool for Displaying and Processing Spectral Reflectance Images", In 2019 15th International Conference on Signal-Image Technology Internet-Based Systems (SITIS)., Nov, 2019. , pp. 725-731. |
Abstract: Modern web browsers can manipulate different types of multimedia files and can be adapted, with standardized technologies (WebAssembly, WebGL, etc.), to an ever-increasing variety of content. In this article, we describe how we set up the necessary data structures and software techniques to enable web browsers to manipulate and visualize multi- and hyperspectral images. A demonstrator, based on two images from a SpecimIQ hyperspectral sensor, is also presented as a showcase. |
BibTeX:
@inproceedings{2019aSITISWAI, author = {P. Colantoni and J. Thomas and M. Hebert and A. Trémeau}, title = {An Online Tool for Displaying and Processing Spectral Reflectance Images}, booktitle = {2019 15th International Conference on Signal-Image Technology Internet-Based Systems (SITIS)}, year = {2019}, pages = {725-731}, url = {http://jbthomas.org/Conferences/2019aSITISWAI.pdf}, doi = {10.1109/SITIS.2019.00118} } |
Colantoni P, Thomas J, Trémeau A and Hardeberg JY (2019), "Web Technologies Enable Agile Color Management", In 2019 15th International Conference on Signal-Image Technology Internet-Based Systems (SITIS)., Nov, 2019. , pp. 303-310. |
Abstract: Given the number of display technologies, cameras, operating systems and software solutions, one of the only technologies compatible across this diversity is the web browser. We show that the technologies now present in web browsers allow independent management of color information on a large variety of devices. For this purpose we introduce the basic concepts of color management and then show how to implement them with WebAssembly and WebGL by introducing the concept of a WebCMM, adapted for the color management of HTML elements in 2D and 3D, but also in virtual environments. Finally, we present how this WebCMM can be implemented for a real color workflow in a demonstrator web page. |
BibTeX:
@inproceedings{2019aSITISIWECA, author = {P. Colantoni and J. Thomas and A. Trémeau and J. Y. Hardeberg}, title = {Web Technologies Enable Agile Color Management}, booktitle = {2019 15th International Conference on Signal-Image Technology Internet-Based Systems (SITIS)}, year = {2019}, pages = {303-310}, url = {http://jbthomas.org/Conferences/2019aSITISIWECA.pdf}, doi = {10.1109/SITIS.2019.00057} } |
Gigilashvili D, Thomas J-B, Pedersen M and Hardeberg JY (2019), "Perceived Glossiness: Beyond Surface Properties", Color and Imaging Conference. Vol. 2019(1), pp. 37-42. |
Abstract: Gloss is widely accepted as a surface- and illumination-based property, both by definition and by means of metrology. However, the mechanisms of gloss perception are yet to be fully understood. Potential cues generating gloss perception can be a product of phenomena other than surface reflection and can vary from person to person. While human observers are unlikely to be capable of inverting optics, they might also fail to predict the origin of the cues. Therefore, we hypothesize that color and translucency could also impact perceived glossiness. To validate our hypothesis, we conducted a series of psychophysical experiments asking observers to rank objects by their glossiness. The objects had identical surface geometry and shape but different color and translucency. The experiments have demonstrated that people do not perceive objects with identical surfaces as equally glossy. Human subjects are usually able to rank objects of identical surface by their glossiness; however, the strategy used for ranking varies across groups of people. Best student paper award |
BibTeX:
@article{2019aCIC, author = {Gigilashvili, Davit and Thomas, Jean-Baptiste and Pedersen, Marius and Hardeberg, Jon Yngve}, title = {Perceived Glossiness: Beyond Surface Properties}, journal = {Color and Imaging Conference}, year = {2019}, volume = {2019}, number = {1}, pages = {37-42}, note = {Best student paper award.}, url = {http://jbthomas.org/Conferences/2019aCIC.pdf}, doi = {10.2352/issn.2169-2629.2019.27.8} } |
Cuevas Valeriano L, Thomas J-B and Benoit A (2018), "Deep learning for dehazing: Benchmark and analysis", In NOBIM. Hafjell, Norway, March, 2018. |
Abstract: We compare a recent dehazing method based on deep learning, Dehazenet, with a traditional state-of-the-art approach, on benchmark data with reference. Dehazenet estimates the depth map from a single color image, which is used to invert the Koschmieder model of imaging in the presence of haze. In this sense, the solution is still attached to the Koschmieder model. We demonstrate that this method exhibits the same limitations as other inversions of this imaging model. Slides there: http://jbthomas.org/Conferences/2018NOBIMSlides.pdf |
BibTeX:
@inproceedings{2018NOBIM, author = {Cuevas Valeriano, Leonel and Thomas, Jean-Baptiste and Benoit, Alexandre}, title = {Deep learning for dehazing: Benchmark and analysis}, booktitle = {NOBIM}, year = {2018}, note = {Slides there: http://jbthomas.org/Conferences/2018NOBIMSlides.pdf}, url = {http://jbthomas.org/Conferences/2018NOBIM.pdf} } |
Thomas J-B, Deniel A and Hardeberg JY (2018), "The Plastique collection: A set of resin objects for material appearance research", In Proceedings of the XIV Conferenza del colore. Firenze, Italy, September, 2018. , pp. 1-12. |
Abstract: We commissioned an artist to realize a collection of objects for material appearance research. The objects are rectangles, spheres and the Plastique artwork, varied in appearance according to color, gloss and translucency. The manufacturing processes and a technical description are presented in this article. We also provide a structured analysis of an interview with the artist that demonstrates the difficulty of describing appearance and the importance of the viewing conditions. |
BibTeX:
@inproceedings{2018CDC, author = {Jean-Baptiste Thomas and Aurore Deniel and Jon Yngve Hardeberg}, title = {The Plastique collection: A set of resin objects for material appearance research}, booktitle = {Proceedings of the XIV Conferenza del colore}, year = {2018}, pages = {1-12}, url = {http://jbthomas.org/Conferences/2018CDC.pdf} } |
El Khoury J, Thomas J-B and Mansouri A (2018), "Colorimetric screening of the haze model limits", In Image and Signal Processing: 8th International Conference, ICISP 2018, Lecture Notes in Computer Science. Cham, June, 2018. Vol. 10884, pp. 481-489. Springer International Publishing. |
Abstract: The haze model, which describes the degradation of atmospheric visibility, is a good approximation for a wide range of weather conditions and situations. However, it misrepresents the perceived scenes and therefore causes undesirable results on dehazed images at high densities of fog. In this paper, using data from the CHIC database, we investigate the possibility of screening the regions of a hazy image where the haze model inversion is likely to fail in providing perceptually recognized colors. This study is based on the perceived correlation between the atmospheric light color and the objects' colors at various fog densities. Accordingly, at high densities of fog, the colors are badly recovered and do not match the original fog-free image. At low fog densities, the haze model inversion provides acceptable results for a large panel of colors. |
BibTeX:
@inbook{2018bICISP, author = {El Khoury, Jessica and Thomas, Jean-Baptiste and Mansouri, Alamin}, editor = {Mansouri, Alamin and Elmoataz, Abderrahim and Nouboud, Fathallah and Mammass, Driss}, title = {Colorimetric screening of the haze model limits}, booktitle = {Image and Signal Processing: 8th International Conference, ICISP 2018, Lecture Notes in Computer Science}, publisher = {Springer International Publishing}, year = {2018}, volume = {10884}, pages = {481-489}, url = {http://jbthomas.org/Conferences/2018bMCS.pdf}, doi = {10.1007/978-3-319-94211-7_52} } |
Valeriano LC, Thomas J-B and Benoit A (2018), "Deep Learning for Dehazing: Comparison and Analysis", In 2018 Colour and Visual Computing Symposium (CVCS)., Sept, 2018. , pp. 1-6. |
Abstract: We compare a recent dehazing method based on deep learning, Dehazenet, with traditional state-of-the-art approaches, on benchmark data with reference. Dehazenet estimates the transmission factor from a single color image, which is used to invert the Koschmieder model of imaging in the presence of haze. In this sense, the solution is still attached to the Koschmieder model. We demonstrate that the transmission is very well estimated by the network, but also that this method exhibits the same limitations as others due to the use of the same imaging model. |
BibTeX:
@inproceedings{2018bCVCS, author = {Leonel Cuevas Valeriano and Jean-Baptiste Thomas and Alexandre Benoit}, title = {Deep Learning for Dehazing: Comparison and Analysis}, booktitle = {2018 Colour and Visual Computing Symposium (CVCS)}, year = {2018}, pages = {1-6}, url = {http://jbthomas.org/Conferences/2018bCVCS.pdf}, doi = {10.1109/CVCS.2018.8496520} } |
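The Koschmieder model this abstract refers to is I = J·t + A·(1 − t), where J is the scene radiance, t the transmission, and A the atmospheric light; dehazing inverts it once t and A are estimated. A minimal sketch with toy values (Dehazenet itself learns the transmission with a CNN, and real methods clamp t to limit noise amplification):

```python
import numpy as np

def koschmieder(J, t, A):
    """Hazy image formation: I = J*t + A*(1 - t)."""
    return J * t + A * (1.0 - t)

def invert_koschmieder(I, t, A, t_min=0.1):
    """Recover scene radiance; t is clamped from below to avoid
    amplifying noise where the fog is dense."""
    t = np.maximum(t, t_min)
    return (I - A) / t + A

J = np.array([0.2, 0.5, 0.8])   # true scene radiance (toy values)
t = np.array([0.9, 0.5, 0.3])   # transmission, decaying with depth
A = 1.0                          # atmospheric light (assumed estimated)

I = koschmieder(J, t, A)         # synthesise the hazy observation
J_rec = invert_koschmieder(I, t, A)
```

The division by small t at high fog density is exactly where the model inversion becomes unreliable, which is the limitation both dehazing papers above demonstrate.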
Thomas J-B and Farup I (2018), "Demosaicing of Periodic and Random Color Filter Arrays by Linear Anisotropic Diffusion", Color and Imaging Conference. Vol. 2018(1), pp. 203-210. |
Abstract: The authors develop several versions of the diffusion equation to demosaic color filter arrays of any kind. In particular, they compare isotropic versus anisotropic and linear versus non-linear formulations. Using these algorithms, they investigate the effect of mosaics on the resulting demosaiced images. They perform cross analysis on images, mosaics, and algorithms. They find that random mosaics do not perform the best with their algorithms, but rather pseudo-random mosaics give the best results. The Bayer mosaic also shows equivalent results to good pseudo-random mosaics in terms of peak signal-to-noise ratio but causes visual aliasing artifacts. The linear anisotropic diffusion method performs the best of the diffusion versions, at the level of state-of-the-art algorithms. |
BibTeX:
@article{2018bCIC, author = {Thomas, Jean-Baptiste and Farup, Ivar}, title = {Demosaicing of Periodic and Random Color Filter Arrays by Linear Anisotropic Diffusion}, journal = {Color and Imaging Conference}, year = {2018}, volume = {2018}, number = {1}, pages = {203-210}, url = {http://jbthomas.org/Conferences/2018bCIC.pdf}, doi = {10.2352/J.ImagingSci.Technol.2018.62.5.050401} } |
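The diffusion idea behind this paper is to treat unsampled mosaic positions as unknowns and evolve a diffusion equation while clamping the sampled pixels. A simplified linear isotropic sketch on one channel (the paper's best-performing variant is anisotropic, and the checkerboard mask here is a toy stand-in for a real filter array):

```python
import numpy as np

def diffuse_channel(vals, mask, n_iter=200, dt=0.2):
    """Fill missing samples of one colour channel by linear isotropic
    diffusion (heat equation), re-imposing the sampled pixels each step.
    Boundaries are periodic via np.roll, which is fine for a sketch."""
    u = np.where(mask, vals, vals[mask].mean())   # initialise the holes
    for _ in range(n_iter):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u = u + dt * lap          # one explicit diffusion step
        u[mask] = vals[mask]      # clamp known mosaic samples
    return u

# toy 4x4 'mosaic': one channel sampled on a checkerboard pattern
rng = np.random.default_rng(1)
green = rng.random((4, 4))
mask = (np.indices((4, 4)).sum(axis=0) % 2) == 0
filled = diffuse_channel(green, mask)
```

Running this independently per channel of any periodic or random mosaic is what makes the diffusion approach mosaic-agnostic; the anisotropic version additionally steers the diffusion along image edges.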
Khan HA, Thomas J-B and Hardeberg J (2018), "Towards highlight based illuminant estimation in multispectral images", In Image and Signal Processing: 8th International Conference, ICISP 2018, Lecture Notes in Computer Science. Cham, June, 2018. Vol. 10884, pp. 517-525. Springer International Publishing. |
Abstract: We review physics-based illuminant estimation methods, which extract information from highlights in images. Such highlights are caused by specular reflection from the surface of dielectric materials and, according to the dichromatic reflection model, provide cues about the illumination. This paper analyzes different categories of highlight-based illuminant estimation techniques for color images from the point of view of their extension to multispectral imaging. We find that the use of a chromaticity space for multispectral imaging is not straightforward, and that imposing constraints on illuminants in the multispectral imaging domain may not be efficient either. We identify some methods that are feasible to extend to multispectral imaging, and discuss the advantage of using highlight information for illuminant estimation. |
BibTeX:
@inbook{2018aICISP, author = {Khan, Haris Ahmad and Thomas, Jean-Baptiste and Hardeberg, Jon}, editor = {Mansouri, Alamin and Elmoataz, Abderrahim and Nouboud, Fathallah and Mammass, Driss}, title = {Towards highlight based illuminant estimation in multispectral images}, booktitle = {Image and Signal Processing: 8th International Conference, ICISP 2018, Lecture Notes in Computer Science}, publisher = {Springer International Publishing}, year = {2018}, volume = {10884}, pages = {517-525}, url = {http://jbthomas.org/Conferences/2018aMCS.pdf}, doi = {10.1007/978-3-319-94211-7_56} } |
Gigilashvili D, Hardeberg JY and Thomas J-B (2018), "Comparison of Mosaic Patterns for Spectral Filter Arrays", In 2018 Colour and Visual Computing Symposium (CVCS)., Sept, 2018. , pp. 1-6. |
Abstract: Spectral Filter Arrays allow snapshot multispectral acquisition within a compact camera. While the Bayer filter mosaic is a widely accepted standard for color filter arrays, no single mosaic pattern is considered dominant for spectral filter arrays. We compare different patterns for 8-band mosaics, and their overall performance in terms of spectral reconstruction, as well as color and structure reproduction. We demonstrate that some mosaics having overrepresentation of certain filters perform better than those with overrepresentation of other filters, while arrays having all filters equally represented perform better than arrays with overrepresentation. |
BibTeX:
@inproceedings{2018aCVCS, author = {Davit Gigilashvili and Jon Yngve Hardeberg and Jean-Baptiste Thomas}, title = {Comparison of Mosaic Patterns for Spectral Filter Arrays}, booktitle = {2018 Colour and Visual Computing Symposium (CVCS)}, year = {2018}, pages = {1-6}, url = {http://jbthomas.org/Conferences/2018aCVCS.pdf}, doi = {10.1109/CVCS.2018.8496717} } |
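As background for the mosaicking step that underlies the pattern comparison above, a spectral filter array samples one band per pixel through a repeating pattern. The toy 4x2 pattern and band count below are illustrative assumptions for this sketch, not the layouts actually evaluated in the paper:

```python
import numpy as np

def mosaic(cube, pattern):
    """Sample a multispectral cube (H, W, K) through a repeating
    filter-array pattern (h, w) of band indices, yielding a
    single-channel raw image (H, W)."""
    H, W, _ = cube.shape
    ph, pw = pattern.shape
    rows = np.arange(H)[:, None] % ph
    cols = np.arange(W)[None, :] % pw
    band = pattern[rows, cols]  # band index selected at each pixel
    return np.take_along_axis(cube, band[..., None], axis=2)[..., 0]

# 8-band cube and a 4x2 pattern in which each band appears exactly once
cube = np.random.rand(8, 8, 8)
pattern = np.arange(8).reshape(4, 2)
raw = mosaic(cube, pattern)
assert raw.shape == (8, 8)
assert raw[0, 0] == cube[0, 0, pattern[0, 0]]
```

Demosaicking is then the inverse problem: estimating the full (H, W, K) cube from `raw`, which is where the choice of pattern matters.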
Gigilashvili D, Thomas J-B, Hardeberg JY and Pedersen M (2018), "Behavioral Investigation of Visual Appearance Assessment", Color and Imaging Conference. Vol. 2018(1), pp. 294-299. |
Abstract: The way people judge, assess, and express the appearance they perceive can vary dramatically from person to person. The objective of this study is to identify research hypotheses and outline directions for future work based on the tasks observers perform. The eventual goal is to understand how people perceive, judge, and assess appearance, and which factors impact their assessments. A series of interviews was conducted in uncontrolled conditions where observers were asked to describe the appearance of physical objects and to complete simple visual tasks, such as ranking objects by their gloss or translucency. The interviews were filmed with the consent of the participants and the videos were subsequently analyzed. The analysis of the data has shown that, while there are cross-individual differences and similarities, surface coarseness, shape, and dye mixture have a significant effect on translucency and gloss perception. |
BibTeX:
@article{2018aCIC, author = {Gigilashvili, Davit and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve and Pedersen, Marius}, title = {Behavioral Investigation of Visual Appearance Assessment}, journal = {Color and Imaging Conference}, year = {2018}, volume = {2018}, number = {1}, pages = {294-299}, url = {http://jbthomas.org/Conferences/2018aCIC.pdf}, doi = {10.2352/ISSN.2169-2629.2018.26.294} } |
Mihoubi S, Mathon B, Thomas J-B, Losson O and Macaire L (2017), "Illumination-robust multispectral demosaicing", In The sixth IEEE International Conference on Image Processing Theory, Tools and Applications IPTA. Montreal, Canada, November, 2017. |
Abstract: Snapshot multispectral cameras that are equipped with filter arrays acquire a raw image that represents the radiance of a scene over the electromagnetic spectrum at video rate. These cameras require a demosaicing procedure to estimate a multispectral image with full spatio-spectral definition. Such a procedure is based on spectral correlation properties that are sensitive to illumination. In this paper, we first highlight the influence of illumination on demosaicing performance. Then we propose camera-, illumination-, and raw-image-based normalisations that make demosaicing robust to illumination. Experimental results on state-of-the-art demosaicing algorithms show that such normalisations improve the quality of multispectral images estimated from raw images acquired under various illuminations. |
BibTeX:
@inproceedings{2017IPTA, author = {Mihoubi, Sofiane and Mathon, Benjamin and Thomas, Jean-Baptiste and Losson, Olivier and Macaire, Ludovic}, title = {Illumination-robust multispectral demosaicing}, booktitle = {The sixth IEEE International Conference on Image Processing Theory, Tools and Applications IPTA}, year = {2017}, url = {http://jbthomas.org/Conferences/2017IPTA.pdf} } |
Ansari K, Thomas J-B and Gouton P (2017), "Spectral band Selection Using a Genetic Algorithm Based Wiener Filter Estimation Method for Reconstruction of Munsell Spectral Data", Electronic Imaging. Vol. 2017(18), pp. 190-193. |
Abstract: Spectrophotometers are the common devices for reflectance measurement. However, there are some drawbacks associated with these devices: price, sample size, and physical state are the main difficulties in applying them to reflectance measurement. Spectral estimation using a set of camera filters is an eligible solution for avoiding these difficulties. Meanwhile, the band selection of the filters needs to be optimized for application in imaging systems. In the present study, a genetic algorithm was applied to find the best set of three- to eight-filter combinations with specific FWHM. The algorithm tries to minimize the color difference between reconstructed and actual spectral data, assuming a simulation of the imaging system. This imaging system is composed of a CMOS sensor, an illuminant, and the 1269 matt Munsell spectral data set as the object. All simulations were done in the visible spectrum. The optimized filter selections were modeled on a CMOS sensor for spectral reflectance reconstruction. The results showed no significant improvement beyond a seven-filter set, although a descending trend in the color difference errors was obtained as the number of filters increased. |
BibTeX:
@article{2017EI, author = {Ansari, Keivan and Thomas, Jean-Baptiste and Gouton, Pierre}, title = {Spectral band Selection Using a Genetic Algorithm Based Wiener Filter Estimation Method for Reconstruction of Munsell Spectral Data}, journal = {Electronic Imaging}, year = {2017}, volume = {2017}, number = {18}, pages = {190-193}, url = {http://jbthomas.org/Conferences/2017EI.pdf}, doi = {10.2352/ISSN.2470-1173.2017.18.COLOR-059} } |
Thomas J-B, Hardeberg JY and Simone G (2017), "Image Contrast Measure as a Gloss Material Descriptor", In Computational Color Imaging: 6th International Workshop, CCIW 2017, Milan, Italy, March 29-31, 2017, Proceedings. Cham , pp. 233-245. Springer International Publishing. |
BibTeX:
@inbook{2017cCCIW, author = {Thomas, Jean-Baptiste and Hardeberg, Jon Yngve and Simone, Gabriele}, editor = {Bianco, Simone and Schettini, Raimondo and Trémeau, Alain and Tominaga, Shoji}, title = {Image Contrast Measure as a Gloss Material Descriptor}, booktitle = {Computational Color Imaging: 6th International Workshop, CCIW 2017, Milan, Italy, March 29-31, 2017, Proceedings}, publisher = {Springer International Publishing}, year = {2017}, pages = {233--245}, url = {http://jbthomas.org/Conferences/2017cCCIW.pdf}, doi = {10.1007/978-3-319-56010-6_20} } |
Khan HA, Thomas JB and Hardeberg JY (2017), "Multispectral Constancy Based on Spectral Adaptation Transform", In Image Analysis: 20th Scandinavian Conference, SCIA 2017, Tromsø, Norway, June 12--14, 2017, Proceedings, Part II. Cham , pp. 459-470. Springer International Publishing. |
BibTeX:
@inbook{2017bSCIA, author = {Khan, Haris Ahmad and Thomas, Jean Baptiste and Hardeberg, Jon Yngve}, editor = {Sharma, Puneet and Bianchi, Filippo Maria}, title = {Multispectral Constancy Based on Spectral Adaptation Transform}, booktitle = {Image Analysis: 20th Scandinavian Conference, SCIA 2017, Tromsø, Norway, June 12--14, 2017, Proceedings, Part II}, publisher = {Springer International Publishing}, year = {2017}, pages = {459--470}, url = {http://jbthomas.org/Conferences/2017bSCIA.pdf}, doi = {10.1007/978-3-319-59129-2_39} } |
Amba P, Thomas JB and Alleysson D (2017), "N-LMMSE Demosaicing for Spectral Filter Arrays", Color and Imaging Conference. Vol. 61(4), pp. 40407-1-40407-11. |
Abstract: Spectral filter array (SFA) technology requires development on demosaicing. The authors extend the linear minimum mean square error with neighborhood method to the spectral dimension. They demonstrate that the method is fast and general on raw SFA images that span the visible and near-infrared part of the electromagnetic range. The method is quantitatively evaluated in simulation first; then the authors evaluate it on real data by the use of non-reference image quality metrics applied on each band. Resulting images show a much better reconstruction of text and high frequencies at the expense of a zipping effect, compared to the benchmark binary-tree method. |
BibTeX:
@article{2017bCIC, author = {Amba, Prakhar and Thomas, Jean Baptiste and Alleysson, David}, title = {N-LMMSE Demosaicing for Spectral Filter Arrays}, journal = {Color and Imaging Conference}, year = {2017}, volume = {61}, number = {4}, pages = {40407-1-40407-11}, url = {http://jbthomas.org/Conferences/2017bCIC.pdf}, doi = {10.2352/J.ImagingSci.Technol.2017.61.4.040407} } |
Khan HA, Thomas J-B and Hardeberg JY (2017), "Analytical Survey of Highlight Detection in Color and Spectral Images", In Computational Color Imaging: 6th International Workshop, CCIW 2017, Milan, Italy, March 29-31, 2017, Proceedings. Cham , pp. 197-208. Springer International Publishing. |
BibTeX:
@inbook{2017bCCIW, author = {Khan, Haris Ahmad and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve}, editor = {Bianco, Simone and Schettini, Raimondo and Trémeau, Alain and Tominaga, Shoji}, title = {Analytical Survey of Highlight Detection in Color and Spectral Images}, booktitle = {Computational Color Imaging: 6th International Workshop, CCIW 2017, Milan, Italy, March 29-31, 2017, Proceedings}, publisher = {Springer International Publishing}, year = {2017}, pages = {197--208}, url = {http://jbthomas.org/Conferences/2017bCCIW.pdf}, doi = {10.1007/978-3-319-56010-6_17} } |
Thomas J-B, Lapray P-J and Gouton P (2017), "HDR Imaging Pipeline for Spectral Filter Array Cameras", In Image Analysis: 20th Scandinavian Conference, SCIA 2017, Tromsø, Norway, June 12--14, 2017, Proceedings, Part II. Cham , pp. 401-412. Springer International Publishing. |
BibTeX:
@inbook{2017aSCIA, author = {Thomas, Jean-Baptiste and Lapray, Pierre-Jean and Gouton, Pierre}, editor = {Sharma, Puneet and Bianchi, Filippo Maria}, title = {HDR Imaging Pipeline for Spectral Filter Array Cameras}, booktitle = {Image Analysis: 20th Scandinavian Conference, SCIA 2017, Tromsø, Norway, June 12--14, 2017, Proceedings, Part II}, publisher = {Springer International Publishing}, year = {2017}, pages = {401--412}, url = {http://jbthomas.org/Conferences/2017aSCIA.pdf}, doi = {10.1007/978-3-319-59129-2_34} } |
de Dravo VW, Khoury JE, Thomas JB, Mansouri A and Hardeberg JY (2017), "An Adaptive Combination of Dark and Bright Channel Priors for Single Image Dehazing", Color and Imaging Conference. Vol. 2017(25), pp. 226-234. |
Abstract: Dehazing methods based on prior assumptions derived from statistical image properties fail when these properties do not hold. This is most likely to happen when the scene contains large bright areas, such as snow and sky, due to the ambiguity between the airlight and the depth information. This is the case for the popular dehazing method Dark Channel Prior. In order to improve its performance, the authors propose to combine it with the recent multiscale STRESS, which serves to estimate the Bright Channel Prior. Visual and quantitative evaluations show that this method outperforms Dark Channel Prior and competes with the most robust dehazing methods, since it separates bright and dark areas and therefore reduces the color cast in very bright regions. © 2017 Society for Imaging Science and Technology. |
BibTeX:
@article{2017aCIC, author = {de Dravo, Vincent Whannou and Khoury, Jessica El and Thomas, Jean Baptiste and Mansouri, Alamin and Hardeberg, Jon Yngve}, title = {An Adaptive Combination of Dark and Bright Channel Priors for Single Image Dehazing}, journal = {Color and Imaging Conference}, year = {2017}, volume = {2017}, number = {25}, pages = {226-234}, url = {http://jbthomas.org/Conferences/2017aCIC.pdf}, doi = {10.2352/J.ImagingSci.Technol.2017.61.4.040408} } |
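For context on the prior this paper builds on: the dark channel of He et al. takes, at each pixel, the minimum over the color channels followed by a minimum over a local patch. A minimal sketch of that computation only; the patch size is an illustrative choice, and the paper's STRESS-based bright-channel combination is not reproduced here:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel: per-pixel minimum over RGB, then a local
    min-filter over a patch x patch neighborhood (edge-padded)."""
    per_pixel_min = img.min(axis=2)
    r = patch // 2
    padded = np.pad(per_pixel_min, r, mode="edge")
    H, W = per_pixel_min.shape
    out = np.empty_like(per_pixel_min)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

# a saturated color region has a near-zero dark channel (haze-free cue)
img = np.zeros((5, 5, 3))
img[..., 2] = 0.9  # pure blue patch: R = 0 everywhere
assert dark_channel(img).max() == 0.0
```

In hazy regions all three channels are lifted by airlight, so the dark channel rises; that deviation from zero is what the prior exploits.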
Lapray P-J, Thomas J-B and Gouton P (2017), "A Database of Spectral Filter Array Images that Combine Visible and NIR", In Computational Color Imaging: 6th International Workshop, CCIW 2017, Milan, Italy, March 29-31, 2017, Proceedings. Cham , pp. 187-196. Springer International Publishing. |
BibTeX:
@inbook{2017aCCIW, author = {Lapray, Pierre-Jean and Thomas, Jean-Baptiste and Gouton, Pierre}, editor = {Bianco, Simone and Schettini, Raimondo and Trémeau, Alain and Tominaga, Shoji}, title = {A Database of Spectral Filter Array Images that Combine Visible and NIR}, booktitle = {Computational Color Imaging: 6th International Workshop, CCIW 2017, Milan, Italy, March 29-31, 2017, Proceedings}, publisher = {Springer International Publishing}, year = {2017}, pages = {187--196}, url = {http://jbthomas.org/Conferences/2017aCCIW.pdf}, doi = {10.1007/978-3-319-56010-6_16} } |
El Khoury J, Thomas J-B and Mansouri A (2016), "A Color Image Database for Haze Model and Dehazing Methods Evaluation", In Image and Signal Processing: 7th International Conference, ICISP 2016, Trois-Rivières, QC, Canada, May 30 - June 1, 2016, Proceedings. Cham , pp. 109-117. Springer International Publishing. |
BibTeX:
@inbook{2016ICISP, author = {El Khoury, Jessica and Thomas, Jean-Baptiste and Mansouri, Alamin}, editor = {Mansouri, Alamin and Nouboud, Fathallah and Chalifour, Alain and Mammass, Driss and Meunier, Jean and Elmoataz, Abderrahim}, title = {A Color Image Database for Haze Model and Dehazing Methods Evaluation}, booktitle = {Image and Signal Processing: 7th International Conference, ICISP 2016, Trois-Rivières, QC, Canada, May 30 - June 1, 2016, Proceedings}, publisher = {Springer International Publishing}, year = {2016}, pages = {109--117}, url = {http://jbthomas.org/Conferences/2016MCS.pdf}, doi = {10.1007/978-3-319-33618-3_12} } |
Sadeghipoor Z, Thomas J-B and Susstrunk S (2016), "Demultiplexing visible and Near-Infrared Information in single-sensor multispectral imaging", Color and Imaging Conference. Vol. 2016(2016), pp. xx-xx. |
Abstract: SFA |
BibTeX:
@article{2016CIC, author = {Sadeghipoor, Zahra and Thomas, Jean-Baptiste and Susstrunk, Sabine}, title = {Demultiplexing visible and Near-Infrared Information in single-sensor multispectral imaging}, journal = {Color and Imaging Conference}, year = {2016}, volume = {2016}, number = {2016}, pages = {xx-xx}, url = {http://jbthomas.org/Conferences/2016CIC.pdf} } |
Thomas J-B (2015), "Illuminant estimation from uncalibrated multispectral images", In Colour and Visual Computing Symposium (CVCS), 2015., Aug, 2015. , pp. 1-6. |
Abstract: We investigate the physical validity of typical computational color constancy models for illuminant estimation of uncalibrated multispectral images. We demonstrate empirically that the assumptions may be reasonable and that we retrieve the illumination reasonably well for some images. On these images, we also have access to a good estimate of the spectral properties of the illumination as the number of bands increases. However, some other images do not provide a very good illuminant estimation. We also show that the result depends mostly on the scene, rather than on the hypothesis made or on the number of spectral bands. Besides, the influence of the algorithm and its hypothesis is more critical for more bands than for the 3-D color case. |
BibTeX:
@inproceedings{2015CVCS, author = {Thomas, Jean-Baptiste}, title = {Illuminant estimation from uncalibrated multispectral images}, booktitle = {Colour and Visual Computing Symposium (CVCS), 2015}, year = {2015}, pages = {1-6}, url = {http://jbthomas.org/Conferences/2015CVCS.pdf}, doi = {10.1109/CVCS.2015.7274900} } |
Zhao P, Pedersen M, Hardeberg JY and Thomas J-B (2015), "Measuring the Relative Image Contrast of Projection Displays", Color and Imaging Conference. Vol. 2015(1), pp. 79-91. |
Abstract: Projection displays, compared to other modern display technologies, have many unique advantages. However, the image quality assessment of projection displays has not been well studied so far. In this paper, we propose an objective approach to measure the relative contrast of projection displays based on pictures taken with a calibrated digital camera in a dark room where the projector is the only light source. A set of carefully selected natural images is modified to generate multiple levels of image contrast. In order to enhance the validity, reliability, and robustness of our research, we performed the experiments in similar viewing conditions at two separate geographical locations with different projection displays. In each location, a group of observers gave perceptual ratings. Further, we adopted state-of-the-art contrast measures to evaluate the relative contrast of the acquired images. The experimental results suggest that the Michelson contrast measure performs the worst, as expected, while other global contrast measures perform relatively better, but they correlate less with the perceptual ratings than local contrast measures. The local contrast measures perform better than global contrast measures for all test images, but all contrast measures failed on test images with low luminance or dominant colors and without texture areas. In addition, the high correlations between the experimental results for the two projection displays indicate that our proposed assessment approach is valid, reliable, and consistent. © 2015 Society for Imaging Science and Technology. |
BibTeX:
@article{2015CIC, author = {Zhao, Ping and Pedersen, Marius and Hardeberg, Jon Yngve and Thomas, Jean-Baptiste}, title = {Measuring the Relative Image Contrast of Projection Displays}, journal = {Color and Imaging Conference}, year = {2015}, volume = {2015}, number = {1}, pages = {79-91}, url = {http://jbthomas.org/Journals/2015JIST.pdf} } |
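The Michelson measure that performed worst in this study is the classical global definition (Lmax - Lmin)/(Lmax + Lmin), which collapses an image to its two extreme luminances. A minimal sketch on a luminance array; the RMS variant is added purely to illustrate a simple statistics-based alternative, not as one of the paper's specific metrics:

```python
import numpy as np

def michelson(lum):
    """Global Michelson contrast of a luminance image."""
    lmax, lmin = float(lum.max()), float(lum.min())
    return (lmax - lmin) / (lmax + lmin) if lmax + lmin > 0 else 0.0

def rms_contrast(lum):
    """Global RMS contrast: standard deviation of the luminance values."""
    return float(lum.std())

lum = np.array([[0.2, 0.8], [0.5, 0.5]])
assert abs(michelson(lum) - 0.6) < 1e-9  # (0.8 - 0.2) / (0.8 + 0.2)
```

Because Michelson depends only on the extrema, a single bright and a single dark pixel saturate it regardless of image content, which is consistent with its weak correlation to perceptual ratings reported above.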
Wang X, Green PJ, Thomas J-B, Hardeberg JY and Gouton P (2015), "Computational Color Imaging: 5th International Workshop, CCIW 2015, Saint Etienne, France, March 24-26, 2015, Proceedings" Cham , pp. 181-191. Springer International Publishing. |
BibTeX:
@inbook{2015CCIW, author = {Wang, Xingbo and Green, Philip J. and Thomas, Jean-Baptiste and Hardeberg, Jon Y. and Gouton, Pierre}, editor = {Trémeau, Alain and Schettini, Raimondo and Tominaga, Shoji}, title = {Computational Color Imaging: 5th International Workshop, CCIW 2015, Saint Etienne, France, March 24-26, 2015, Proceedings}, publisher = {Springer International Publishing}, year = {2015}, pages = {181--191}, url = {http://jbthomas.org/Conferences/2015CCIW.pdf}, doi = {10.1007/978-3-319-15979-9_18} } |
El Khoury J, Thomas J-B and Mansouri A (2015), "Haze and convergence models: Experimental comparison", In AIC 2015. Tokyo, Japan, May, 2015. |
BibTeX:
@inproceedings{2015AIC, author = {El Khoury, Jessica and Thomas, Jean-Baptiste and Mansouri, Alamin}, title = {Haze and convergence models: Experimental comparison}, booktitle = {AIC 2015}, year = {2015}, url = {http://jbthomas.org/Conferences/2015AIC.pdf} } |
Benezeth Y, Sidibé D and Thomas J-B (2014), "Background subtraction with multispectral video sequences", In IEEE International Conference on Robotics and Automation workshop on Non-classical Cameras, Camera Networks and Omnidirectional Vision (OMNIVIS). , pp. 6-p. |
Abstract: Motion analysis of moving targets is an important issue in several applications such as video surveillance or robotics. Background subtraction is one of the simplest and most widely used techniques for moving target detection in video sequences. In this paper, we investigate the advantages of using a multispectral video acquisition system of more than three bands for background subtraction over the use of trichromatic or monochromatic video sequences. To this end, we have established a dataset of multispectral videos with a manual annotation of moving objects. To the best of our knowledge, this is the first publicly available dataset of multispectral video sequences. Experimental results indicate that using more than three spectral sub-bands provides better discrimination between foreground and background pixels. In particular, the use of the near-infrared (NIR) spectral band together with the visible spectrum provides the best results. |
BibTeX:
@inproceedings{2014OMNIVIS, author = {Benezeth, Yannick and Sidibé, Désiré and Thomas, Jean-Baptiste}, title = {Background subtraction with multispectral video sequences}, booktitle = {IEEE International Conference on Robotics and Automation workshop on Non-classical Cameras, Camera Networks and Omnidirectional Vision (OMNIVIS)}, year = {2014}, pages = {6--p}, url = {http://jbthomas.org/Conferences/2014OMNIVIS.pdf} } |
Zhao P, Pedersen M, Hardeberg JY and Thomas JB (2014), "Image registration for quality assessment of projection displays", In 2014 IEEE International Conference on Image Processing (ICIP)., Oct, 2014. , pp. 3488-3492. |
Abstract: In full-reference-metric-based image quality assessment of projection displays, it is critical to achieve accurate and fully automatic image registration between the captured projection and its reference image in order to establish a subpixel-level mapping. The preservation of geometrical order as well as the intensity and chromaticity relationships between two consecutive pixels must be maximized. Existing camera-based image registration methods do not meet this requirement well. In this paper, we propose a markerless and view-independent method that uses an uncalibrated camera to perform the task. The proposed method includes three main components: feature extraction, feature expansion, and geometric correction, and it can easily be implemented in a fully automatic fashion. Experimental results from both simulation and field tests demonstrate that the proposed method achieves image registration accuracy higher than 91% in a dark projection room and above 85% with ambient light lower than 30 lux. |
BibTeX:
@inproceedings{2014ICIP, author = {P. Zhao and M. Pedersen and J. Y. Hardeberg and J. B. Thomas}, title = {Image registration for quality assessment of projection displays}, booktitle = {2014 IEEE International Conference on Image Processing (ICIP)}, year = {2014}, pages = {3488-3492}, url = {http://jbthomas.org/Conferences/2014ICIP.pdf}, doi = {10.1109/ICIP.2014.7025708} } |
Wang X, Pedersen M and Thomas J-B (2014), "The influence of chromatic aberration on demosaicking", In Visual Information Processing (EUVIP), 2014 5th European Workshop on., Dec, 2014. , pp. 1-6. |
Abstract: The wide deployment of colour imaging devices owes much to the use of the colour filter array (CFA). A CFA produces a mosaic image, and a subsequent CFA demosaicking algorithm normally interpolates the mosaic image and estimates the full-resolution colour image. Among the various types of optical aberrations from which a mosaic image may suffer, chromatic aberration (CA) influences the spatial and spectral correlation on which demosaicking relies, through artefacts such as blur and mis-registration. In this paper we propose a simulation framework aimed at investigating the influence of CA on demosaicking. Results show that CA benefits demosaicking to some extent; however, CA ultimately lowers the quality of the resulting images. |
BibTeX:
@inproceedings{2014EUVIP, author = {Wang, Xingbo and Pedersen, Marius and Thomas, Jean-Baptiste}, title = {The influence of chromatic aberration on demosaicking}, booktitle = {Visual Information Processing (EUVIP), 2014 5th European Workshop on}, year = {2014}, pages = {1-6}, url = {http://jbthomas.org/Conferences/2014EUVIP.pdf}, doi = {10.1109/EUVIP.2014.7018410} } |
El Khoury J, Thomas J-B and Mansouri A (2014), "Does Dehazing Model Preserve Color Information?", In Signal-Image Technology and Internet-Based Systems (SITIS), 2014 Tenth International Conference on., Nov, 2014. , pp. 606-613. |
Abstract: Image dehazing aims at estimating the image information lost due to the presence of fog, haze, and smoke in the scene during acquisition. Degradation causes a loss in contrast and color information, so enhancement becomes an inevitable task in imaging applications and consumer photography. Color information has mostly been evaluated perceptually along with quality, but no work specifically addresses this aspect. We demonstrate how the dehazing model affects color information on simulated and real images. We use a convergence model from the perception of transparency to simulate haze on images. We evaluate color loss in terms of angle of hue in the IPT color space, saturation in the CIE LUV color space, and perceived color difference in the CIE LAB color space. Results indicate that saturation is critically changed, and hue is changed for achromatic colors and blue/yellow colors, where usual image processing spaces do not show constant hue lines. We suggest that a correction model based on color transparency perception could help retrieve color information as an additive layer on dehazing algorithms. |
BibTeX:
@inproceedings{2014COMI, author = {El Khoury, Jessica and Thomas, Jean-Baptiste and Mansouri, Alamin}, title = {Does Dehazing Model Preserve Color Information?}, booktitle = {Signal-Image Technology and Internet-Based Systems (SITIS), 2014 Tenth International Conference on}, year = {2014}, pages = {606-613}, url = {http://jbthomas.org/Conferences/2014CoMI.pdf}, doi = {10.1109/SITIS.2014.78} } |
Lapray P-J, Thomas J-B and Gouton P (2014), "A Multispectral Acquisition System using MSFAs", Color and Imaging Conference. Vol. 2014(2014), pp. 97-102. |
Abstract: Thanks to technical progress in interferential filter design, we can finally implement in practice the concept of Multispectral Filter Array based sensors. This article presents the characteristics of the elements of our sensor as a case study. The spectral characteristics are based on two different spatial arrangements that distribute eight different bandpass filters in the visible and near-infrared area of the spectrum. We demonstrate that the system is viable and evaluate its performance through sensor spectral simulation and characterization. |
BibTeX:
@article{2014CICb, author = {Lapray, Pierre-Jean and Thomas, Jean-Baptiste and Gouton, Pierre}, title = {A Multispectral Acquisition System using MSFAs}, journal = {Color and Imaging Conference}, year = {2014}, volume = {2014}, number = {2014}, pages = {97-102}, url = {http://jbthomas.org/Conferences/2014bCIC.pdf} } |
Zhao P, Pedersen M, Thomas J-B and Hardeberg JY (2014), "Perceptual Spatial Uniformity Assessment of Projection Displays with a Calibrated Camera", Color and Imaging Conference. Vol. 2014(2014), pp. 159-164. |
Abstract: Spatial uniformity is one of the most important image quality attributes in the visual experience of displays. In conventional research, spatial uniformity has mostly been measured with a radiometer and its quality assessed with no-reference image quality metrics. Cameras are cheaper than radiometers and can provide accurate relative measurements if they are carefully calibrated. In this paper, we propose and implement a workflow that uses a calibrated camera as a relative acquisition device of intensity to measure the spatial uniformity of projection displays. The camera intensity transfer functions for every projected pixel are recovered, so we can produce multiple levels of linearized non-uniformity on the screen for the purpose of image quality assessment. The experimental results suggest that our workflow works well. However, none of the frequently cited uniformity metrics correlate well with the perceptual results for all types of test images. The spatial non-uniformity is largely masked by the high-frequency components of the displayed image content, and we should simulate the human visual system to ignore non-uniformity that cannot be discriminated by human observers. The simulation can be implemented using models based on contrast sensitivity functions, contrast masking, etc. |
BibTeX:
@article{2014CICa, author = {Zhao, Ping and Pedersen, Marius and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve}, title = {Perceptual Spatial Uniformity Assessment of Projection Displays with a Calibrated Camera}, journal = {Color and Imaging Conference}, year = {2014}, volume = {2014}, number = {2014}, pages = {159-164}, url = {http://jbthomas.org/Conferences/2014aCIC.pdf} } |
Zhao P, Pedersen M, Hardeberg JY and Thomas J-B (2013), "Camera-based measurement of relative image contrast in projection displays", In Visual Information Processing (EUVIP), 2013 4th European Workshop on., June, 2013. , pp. 112-117. |
Abstract: This research investigated the measured contrast of projection displays based on pictures taken by uncalibrated digital cameras under typical viewing conditions. A high-end radiometer was employed as a reference for the physical response of projection luminance. Checkerboard, gray-scale, and complex color test images were projected across a range of the projector's brightness and contrast settings. Two local and two global contrast metrics were evaluated on the acquired pictures. We used contrast surface plots and Pearson correlation to investigate the measured contrast versus the projector's brightness and contrast settings. The results suggested, as expected, that the projector's contrast setting has a more significant impact on measured contrast than its brightness setting, but the measured contrast based on either camera or radiometer has a nonlinear relationship with the projector settings. The results also suggested that simple statistics-based metrics might produce a higher Pearson correlation with both projector contrast and projector brightness than more complex contrast metrics. Our results demonstrated that the rank order of uncalibrated-camera-based and radiometer-based measured contrast is preserved for large steps of projector setting differences. |
BibTeX:
@inproceedings{2013EUVIP, author = {Zhao, Ping and Pedersen, Marius and Hardeberg, Jon Yngve and Thomas, Jean-Baptiste}, title = {Camera-based measurement of relative image contrast in projection displays}, booktitle = {Visual Information Processing (EUVIP), 2013 4th European Workshop on}, year = {2013}, pages = {112-117}, url = {http://jbthomas.org/Conferences/2013EUVIP.pdf} } |
Wang X, Thomas J-B, Hardeberg JY and Gouton P (2013), "Median filtering in multispectral filter array demosaicking", Proc. SPIE. Vol. 8660, pp. 86600E-86600E-10. |
Abstract: Inspired by the concept of the colour filter array (CFA), the research community has shown much interest in adapting the idea of the CFA to the multispectral domain, producing multispectral filter arrays (MSFAs). In addition to newly devised methods of MSFA demosaicking, there exists a wide spectrum of methods developed for the CFA. Among others, some vector-based operations can be adapted naturally for multispectral purposes. In this paper, we focused on studying two vector-based median filtering methods in the context of MSFA demosaicking. One solves demosaicking problems by means of vector median filters, and the other applies median filtering to the demosaicked image in spherical space as a subsequent refinement process to reduce artefacts introduced by demosaicking. To evaluate the performance of these methods, a toolkit was constructed with the capability of mosaicking, demosaicking, and quality assessment. The experimental results demonstrated that vector median filtering performed less well for natural images, except black-and-white images; however, the refinement step reduced the reproduction error numerically in most cases. This proved the feasibility of extending CFA demosaicking into the MSFA domain. |
BibTeX:
@inproceedings{2013EI, author = {Wang, Xingbo and Thomas, Jean-Baptiste and Hardeberg, Jon Y. and Gouton, Pierre}, title = {Median filtering in multispectral filter array demosaicking}, booktitle = {Proc. SPIE}, year = {2013}, volume = {8660}, pages = {86600E-86600E-10}, url = {http://jbthomas.org/Conferences/2013EI.pdf}, doi = {10.1117/12.2005256} } |
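For background on the vector median operation this paper studies: among the spectral vectors in a window, the vector median is the input vector minimizing the summed distance to all the others (Astola et al.), which guarantees the output is always one of the observed spectra. A minimal sketch of that selection step only, not of the full demosaicking or refinement pipeline:

```python
import numpy as np

def vector_median(vectors):
    """Return the vector (row of an (N, K) array) minimizing the sum
    of Euclidean distances to all other vectors in the set."""
    # pairwise distance matrix via broadcasting: d[i, j] = ||v_i - v_j||
    d = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=2)
    return vectors[d.sum(axis=1).argmin()]

# the outlier (10, 10, 10) is rejected; the output is one of the inputs
v = np.array([[1.0, 1, 1], [1.1, 1, 1], [0.9, 1, 1], [10, 10, 10]])
m = vector_median(v)
assert (m == v[0]).all()
```

Because the output is drawn from the input set, no new spectra are invented, which is why this operator is attractive for filtering mosaicked spectral data.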
Wang X, Thomas J-B, Hardeberg J and Gouton P (2013), "Discrete wavelet transform based multispectral filter array demosaicking", In Colour and Visual Computing Symposium (CVCS), 2013., Sept, 2013. , pp. 1-6. |
Abstract: The idea of the colour filter array may be adapted to multispectral image acquisition by integrating more filter types into the array, and developing associated demosaicking algorithms. Several methods employing the discrete wavelet transform (DWT) have been proposed for CFA demosaicking. In this work, we put forward an extended use of DWT for multispectral filter array demosaicking. The extension seemed straightforward; however, we observed striking results. This work contributes to a better understanding of the issue by demonstrating that the spectral correlation and spatial resolution of the images exert a crucial influence on the performance of DWT-based demosaicking. |
BibTeX:
@inproceedings{2013CVCSb, author = {Xingbo Wang and Thomas, J.-B. and Hardeberg, J.Y. and Gouton, P.}, title = {Discrete wavelet transform based multispectral filter array demosaicking}, booktitle = {Colour and Visual Computing Symposium (CVCS), 2013}, year = {2013}, pages = {1-6}, url = {http://jbthomas.org/Conferences/2013bCVCS.pdf}, doi = {10.1109/CVCS.2013.6626274} } |
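The detail-transfer idea behind DWT-based demosaicking can be illustrated with a one-level Haar transform: the low-frequency subband of a coarsely interpolated channel is combined with the detail subbands of a fully sampled reference channel, exploiting the spectral correlation the abstract points to. A minimal NumPy sketch, our own simplification rather than the paper's algorithm:

```python
import numpy as np

def haar2(x):
    """One-level 2D Haar DWT of an even-sized image: (LL, LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def transfer_details(interpolated_ch, reference_ch):
    """Keep the approximation of a coarsely interpolated channel, but
    replace its detail subbands with those of a fully sampled reference."""
    LL, _, _, _ = haar2(interpolated_ch)
    _, LH, HL, HH = haar2(reference_ch)
    return ihaar2(LL, LH, HL, HH)
```

When spectral correlation is weak, borrowed details misrepresent the channel, which is consistent with the dependence on image content reported above.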
Peguillet H, Thomas J-B, Gouton P and Ruichek Y (2013), "Energy balance in single exposure multispectral sensors", In Colour and Visual Computing Symposium (CVCS), 2013., Sept, 2013. , pp. 1-6. |
Abstract: Recent simulations of multispectral sensors are based on a simple Gaussian model, which includes filter transmittance and substrate absorption. In this paper we want to make the distinction between these two layers. We discuss the balance of energy per channel in multispectral solid-state sensors and propose an updated simple Gaussian model to simulate multispectral sensors. Results are based on simulation of typical sensor configurations. |
BibTeX:
@inproceedings{2013CVCSa, author = {Peguillet, Hugues and Thomas, Jean-Baptiste and Gouton, Pierre and Ruichek, Yassine}, title = {Energy balance in single exposure multispectral sensors}, booktitle = {Colour and Visual Computing Symposium (CVCS), 2013}, year = {2013}, pages = {1-6}, url = {http://jbthomas.org/Conferences/2013aCVCS.pdf}, doi = {10.1109/CVCS.2013.6626277} } |
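The simple Gaussian model discussed above separates filter transmittance from substrate absorption. A hedged sketch of how per-channel energy can be simulated under that separation; the function names and the rectangle-rule integration are ours, not the paper's:

```python
import numpy as np

def gaussian_filter(wl, centre, fwhm):
    """Gaussian transmittance curve with unit peak, parameterised by
    centre wavelength and full width at half maximum (nm)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wl - centre) / sigma) ** 2)

def channel_energy(wl, radiance, transmittance, substrate):
    """Energy collected by one channel: radiance weighted by the colour
    filter and by the substrate sensitivity, integrated over wavelength
    (rectangle rule; assumes uniform wavelength sampling)."""
    return float(np.sum(radiance * transmittance * substrate) * (wl[1] - wl[0]))
```

With a flat radiance and a flat substrate, two Gaussian channels of equal FWHM collect the same energy; a wavelength-dependent substrate is exactly what unbalances the channels.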
Thomas J-B, Colantoni P and Trémeau A (2013), "Computational Color Imaging: 4th International Workshop, CCIW 2013, Chiba, Japan, March 3-5, 2013. Proceedings" Berlin, Heidelberg , pp. 53-67. Springer Berlin Heidelberg. |
BibTeX:
@inbook{2013CCIW, author = {Thomas, Jean-Baptiste and Colantoni, Philippe and Trémeau, Alain}, editor = {Tominaga, Shoji and Schettini, Raimondo and Trémeau, Alain}, title = {Computational Color Imaging: 4th International Workshop, CCIW 2013, Chiba, Japan, March 3-5, 2013. Proceedings}, publisher = {Springer Berlin Heidelberg}, year = {2013}, pages = {53--67}, url = {http://jbthomas.org/Conferences/2013CCIW.pdf}, doi = {10.1007/978-3-642-36700-7_5} } |
Wang X, Thomas J-B, Hardeberg JY and Gouton P (2013), "A Study on the Impact of Spectral Characteristics of Filters on Multispectral Image Acquisition", In Proceedings of AIC Colour 2013. Gateshead, Royaume-Uni, July, 2013. Vol. 4, pp. 1765-1768. |
Abstract: In every aspect, filter design plays an important role in an image acquisition system based on a single image sensor and a colour filter array (CFA) mounted onto the sensor. Complementary CFAs are used by some colour cameras in the interest of higher sensitivity, which motivated us to employ filters of wide pass bands in the effort to adapt CFA for multispectral image acquisition. In this context, filter design has an effect on the accuracy of spectrum reconstruction in addition to other aspects. The results show that wider bandwidths in general result in more faithful spectrum reconstruction and higher signal-to-noise performance. |
BibTeX:
@inproceedings{2013AIC, author = {Wang, Xingbo and Thomas, Jean-Baptiste and Hardeberg, Jon Yngve and Gouton, Pierre}, editor = {Lindsay MacDonald and Stephen Westland and Sophie Wuerger}, title = {A Study on the Impact of Spectral Characteristics of Filters on Multispectral Image Acquisition}, booktitle = {Proceedings of AIC Colour 2013}, year = {2013}, volume = {4}, pages = {1765-1768}, url = {http://jbthomas.org/Conferences/2013AIC.pdf} } |
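A common way to study how filter bandwidth affects spectrum reconstruction, as in the experiments above, is to learn a linear reconstruction operator by least squares on training spectra. This is a minimal sketch of that generic approach, not necessarily the paper's exact method:

```python
import numpy as np

def train_reconstruction(R_train, M):
    """Least-squares operator mapping simulated camera responses back to
    spectra. R_train: n x b training reflectances; M: k x b channel
    sensitivities (k channels, b wavelength samples)."""
    C = R_train @ M.T                        # n x k simulated responses
    W, *_ = np.linalg.lstsq(C, R_train, rcond=None)
    return W                                 # k x b reconstruction operator

def reconstruct(responses, W):
    """Estimate spectra from camera responses."""
    return responses @ W
```

Wider pass bands gather more signal (better SNR), while the reconstruction accuracy depends on how well k responses constrain the b-dimensional spectra.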
Thomas J-B and Gerhardt J (2012), "Webcam based display calibration", Color and Imaging Conference. Vol. 2012(1), pp. 82-87. |
Abstract: We present an automatic method for measuring the tone response curve of display devices based on visual methods, where the eye is replaced by an uncalibrated end-user camera, such as a webcam. Our approach compares a series of halftoned patches of known covering ratio with a continuous series of tone patches for each ratio. Both patches are shot by a camera that is used as a virtual eye to evaluate the luminance difference. Through an iterative process, the continuous tone value is adjusted while being compared with the perceived level of the halftoned patch. When the camera sees no difference (or a minimal one) between the patches, the luminance level of the continuous patch corresponds to the relative luminance of the halftoned patch covering ratio. We demonstrate that the method is as accurate as an equivalent visual method. The advantage of using a camera over the human eye is that it limits observer variability in visual tasks. |
BibTeX:
@article{2012CIC, author = {Thomas, Jean-Baptiste and Gerhardt, Jeremie}, title = {Webcam based display calibration}, journal = {Color and Imaging Conference}, year = {2012}, volume = {2012}, number = {1}, pages = {82-87}, url = {http://jbthomas.org/Conferences/2012CIC.pdf} } |
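The iterative patch-matching procedure above can be sketched as a bisection on the continuous-tone digital level. In this toy model the camera comparison is replaced by a display response function; the real method compares camera readings of the two displayed patches, and we assume a zero black level:

```python
import numpy as np

def match_level(coverage, display_luminance, levels=256, iters=20):
    """Find the digital level whose continuous-tone luminance matches a
    halftone patch of the given coverage ratio, by bisection on the level.
    `display_luminance(v)` stands in for display + camera reading and is
    assumed monotonically increasing; black level assumed zero."""
    target = coverage * display_luminance(levels - 1)  # halftone mean luminance
    lo, hi = 0.0, float(levels - 1)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if display_luminance(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Repeating this for several coverage ratios yields samples of the inverse tone response curve.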
Thomas J-B and Boust C (2011), "Colorimetric Characterization of a Positive Film Scanner Using an Extremely Reduced Training Data Set", Color and Imaging Conference. Vol. 2011(1), pp. 152-155. |
Abstract: In this work, we address the problem of achieving an accurate colorimetric characterization of a scanner for traditional positive film, in order to guarantee the accuracy of the color information during the digitization of a movie. The scanning of a positive film is not a usual task; however, it can occur for cultural heritage purposes. Art movies are often created and stored as positive film in museums. One of the problems one can face in a colorimetric characterization is obtaining a reasonable number of measurements from an item. In this work we succeeded in achieving reasonable accuracy with just a small number of measurements (typically 4 to 7 ΔE*ab units with 2 to fewer than 10 measurements). |
BibTeX:
@article{2011CIC, author = {Thomas, Jean-Baptiste and Boust, Clotilde}, title = {Colorimetric Characterization of a Positive Film Scanner Using an Extremely Reduced Training Data Set}, journal = {Color and Imaging Conference}, year = {2011}, volume = {2011}, number = {1}, pages = {152-155}, url = {http://jbthomas.org/Conferences/2011CIC.pdf} } |
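With only a handful of training patches, a characterization must use a low-parameter model. A sketch of a 3x4 affine scanner-RGB-to-XYZ mapping fitted by least squares, which is solvable exactly from four patches; this is illustrative only, and the paper's exact model may differ:

```python
import numpy as np

def fit_characterization(rgb, xyz):
    """Fit a 4x3 affine matrix mapping scanner RGB to XYZ by least
    squares; usable with as few as 4 training patches."""
    A = np.hstack([rgb, np.ones((rgb.shape[0], 1))])  # add affine offset
    M, *_ = np.linalg.lstsq(A, xyz, rcond=None)
    return M

def apply_characterization(rgb, M):
    """Map scanner RGB values through the fitted affine model."""
    A = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return A @ M
```

Accuracy would then be reported as colour differences between predicted and measured values on held-out patches.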
Colantoni P, Thomas J-B and Pillay R (2010), "Graph-based 3D Visualization of Color Content in Paintings", In VAST: International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage - Short and Project Papers. The Eurographics Association. |
BibTeX:
@incollection{2010VAST, author = {Colantoni, Philippe and Thomas, Jean-Baptiste and Pillay, Ruven}, editor = {Alessandro Artusi and Morwena Joly and Genevieve Lucet and Denis Pitzalis and Alejandro Ribes}, title = {Graph-based 3D Visualization of Color Content in Paintings}, booktitle = {VAST: International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage - Short and Project Papers}, publisher = {The Eurographics Association}, year = {2010}, url = {http://jbthomas.org/Conferences/2010VAST.pdf}, doi = {10.2312/PE/VAST/VAST10S/025-030} } |
Thomas J-B (2010), "Controlling color in display: A discussion on quality", CREATE. |
Abstract: Display and Quality |
BibTeX:
@article{2010CREATE, author = {Thomas, Jean-Baptiste}, title = {Controlling color in display: A discussion on quality}, journal = {CREATE}, year = {2010}, url = {http://jbthomas.org/Conferences/2010CREATE.pdf} } |
Gerhardt J and Thomas J-B (2010), "Toward an automatic color calibration for 3D displays", Color and Imaging Conference. Vol. 2010(1), pp. 5-10. |
Abstract: This article considers the color correction of a 3D projection display installation. The system consists of a pair of projectors of the same model, modified by INFITEC GmbH such that they can be used for projection of 3D content. The goal of this color correction is to reduce the difference between the two modified projectors such that the color difference between them does not disturb the user. Two new approaches are proposed and compared with the Infitec expert correction. One is based on an objective colorimetric match, the other on the optimization of a transform considering the color difference between the two signals. |
BibTeX:
@article{2010CIC, author = {Gerhardt, Jeremie and Thomas, Jean-Baptiste}, title = {Toward an automatic color calibration for 3D displays}, journal = {Color and Imaging Conference}, year = {2010}, volume = {2010}, number = {1}, pages = {5-10}, url = {http://jbthomas.org/Conferences/2010CIC.pdf} } |
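An objective colorimetric match between the two devices, as in the first proposed approach, can be sketched as a least-squares linear correction fitted on colours measured on both projectors. This is a toy version; the paper also considers the colour difference between the two signals:

```python
import numpy as np

def match_projectors(src_xyz, dst_xyz):
    """3x3 correction matrix M minimising ||src_xyz @ M - dst_xyz|| over
    a set of colours measured on both projectors."""
    M, *_ = np.linalg.lstsq(src_xyz, dst_xyz, rcond=None)
    return M
```

Applying `M` to the source projector's signal chain would pull its output towards the destination projector's colours.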
Colantoni P and Thomas J-B (2009), "A Color Management Process for Real Time Color Reconstruction of Multispectral Images", In Image Analysis. Berlin, Heidelberg , pp. 128-137. Springer Berlin Heidelberg. |
Abstract: We introduce a new accurate and technology-independent display color characterization model for color rendering of multispectral images. The establishment of this model is automatic, and does not exceed the time of a coffee break, to be efficient in a practical situation. This model is part of the color management workflow of the new tools designed at the C2RMF for multispectral image analysis of paintings acquired with the equipment developed during the CRISATEL European project. The analysis is based on color reconstruction with virtual illuminants and uses a GPU (graphics processing unit) based processing model in order to interact in real time with virtual lighting. |
BibTeX:
@inproceedings{2009SCIA, author = {Colantoni, Philippe and Thomas, Jean-Baptiste}, editor = {Salberg, Arnt-Børre and Hardeberg, Jon Yngve and Jenssen, Robert}, title = {A Color Management Process for Real Time Color Reconstruction of Multispectral Images}, booktitle = {Image Analysis}, publisher = {Springer Berlin Heidelberg}, year = {2009}, pages = {128--137}, url = {http://jbthomas.org/Conferences/2009SCIA.pdf}, doi = {10.1007/978-3-642-02230-2_14} } |
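The colour reconstruction with virtual illuminants mentioned above reduces, per pixel, to integrating reflectance times illuminant against colour matching functions. A minimal CPU sketch with our own normalisation convention (perfect white maps to Y = 100); the paper's implementation runs on the GPU:

```python
import numpy as np

def reflectance_to_xyz(reflectance, illuminant, cmf):
    """Colour reconstruction of one spectral reflectance under a virtual
    illuminant: XYZ = k * cmf^T (R * E), with k normalising a perfect
    white to Y = 100. `cmf` is an n x 3 array of colour matching
    functions sampled at the same n wavelengths as R and E."""
    k = 100.0 / np.sum(illuminant * cmf[:, 1])
    return k * (cmf.T @ (reflectance * illuminant))
```

Swapping `illuminant` relights the painting virtually, which is the interactive use case described above.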
Bakke AM, Thomas J-B and Gerhardt J (2009), "Common assumptions in color characterization of projectors", GCIS'09. (3), pp. 50-55. |
Abstract: Projection system spatial uniformity. |
BibTeX:
@inproceedings{2009GCIS, author = {Bakke, Arne Magnus and Thomas, Jean-Baptiste and Gerhardt, Jeremie}, title = {Common assumptions in color characterization of projectors}, booktitle = {GCIS'09}, year = {2009}, number = {3}, pages = {50-55}, url = {http://jbthomas.org/Conferences/2009GCIS.pdf} } |
Thomas J-B and Bakke AM (2009), "Computational Color Imaging: Second International Workshop, CCIW 2009, Saint-Etienne, France, March 26-27, 2009. Revised Selected Papers" Berlin, Heidelberg , pp. 160-169. Springer Berlin Heidelberg. |
BibTeX:
@inproceedings{2009CCIW, author = {Thomas, Jean-Baptiste and Bakke, Arne Magnus}, editor = {Trémeau, Alain and Schettini, Raimondo and Tominaga, Shoji}, title = {Computational Color Imaging: Second International Workshop, CCIW 2009, Saint-Etienne, France, March 26-27, 2009. Revised Selected Papers}, publisher = {Springer Berlin Heidelberg}, year = {2009}, pages = {160--169}, url = {http://jbthomas.org/Conferences/2009CCIW.pdf}, doi = {10.1007/978-3-642-03265-3_17} } |
Thomas J-B, Colantoni P, Hardeberg JY, Foucherot I and Gouton P (2008), "An inverse display color characterization model based on an optimized geometrical structure", Proc. SPIE. Vol. 6807, pp. 68070A-68070A-12. |
Abstract: We have defined an inverse model for colorimetric characterization of additive displays. It is based on an optimized three-dimensional tetrahedral structure. In order to minimize the number of measurements, the structure is defined using a forward characterization model. Defining a regular grid in the device-dependent destination color space leads to heterogeneous interpolation errors in the device-independent source color space. The parameters of the function used to define the grid are optimized using a globalized Nelder-Mead simplex downhill algorithm. Several cost functions are tested on several devices. We have performed experiments with a forward model which assumes variation in chromaticities (PLVC), based on one-dimensional interpolations for each primary ramp along X, Y and Z (3×3×1-D). Results on four devices (two LCD and one DLP projection devices, and one LCD monitor) are shown and discussed. |
BibTeX:
@inproceedings{2008EI, author = {Thomas, Jean-Baptiste and Colantoni, Philippe and Hardeberg, Jon Y. and Foucherot, Irène and Gouton, Pierre}, title = {An inverse display color characterization model based on an optimized geometrical structure}, booktitle = {Proc. SPIE}, year = {2008}, volume = {6807}, pages = {68070A-68070A-12}, url = {http://jbthomas.org/Conferences/2008EI.pdf}, doi = {10.1117/12.766487} } |
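Inside one tetrahedron of such a structure, the inverse model interpolates with barycentric weights. A minimal sketch of that single step (the optimization of the grid itself, the heart of the paper, is omitted):

```python
import numpy as np

def barycentric_interp(p, verts, values):
    """Interpolate inside one tetrahedron: solve for the barycentric
    weights of point p w.r.t. the 4 vertices (rows of `verts`, shape
    4x3), then blend the vertex `values` with those weights."""
    T = np.vstack([verts.T, np.ones(4)])        # 4x4 system: coords + weight sum
    w = np.linalg.solve(T, np.append(p, 1.0))   # barycentric weights, sum to 1
    return w @ values
```

In the display model, `p` would be an XYZ colour, `verts` the XYZ of four measured grid points, and `values` their device RGB values.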
Mikalsen EB, Hardeberg JY and Thomas J-B (2008), "Verification and extension of a camera-based end-user calibration method for projection displays", Conference on Colour in Graphics, Imaging, and Vision. Vol. 2008(1), pp. 575-579. |
Abstract: We evaluate, analyse and propose improvements to a previously published end-user calibration method for projection devices (Bala and Braun, CIC 2006). We focus on the estimation of the display's tone response curve, using only an uncalibrated consumer camera. The results show that the method's accuracy depends on both the projector and the camera used. We found that the method is accurate enough for most end-user applications. A weakness of this method is the erroneous estimation of the projector's black level, which significantly affects the estimation of the camera response curve. |
BibTeX:
@article{2008CGIV, author = {Mikalsen, Espen Bårdsnes and Hardeberg, Jon Y. and Thomas, Jean-Baptiste}, title = {Verification and extension of a camera-based end-user calibration method for projection displays}, journal = {Conference on Colour in Graphics, Imaging, and Vision}, year = {2008}, volume = {2008}, number = {1}, pages = {575-579}, url = {http://jbthomas.org/Conferences/2008CGIV.pdf} } |
Thomas J-B, Hardeberg J, Foucherot I and Gouton P (2007), "Additivity Based LC Display Color Characterization", GCIS'07. (2), pp. 50-55. |
Abstract: PLVC model. |
BibTeX:
@inproceedings{2007GCIS, author = {Thomas, Jean-Baptiste and Hardeberg, Jon and Foucherot, Irene and Gouton, Pierre}, title = {Additivity Based LC Display Color Characterization}, booktitle = {GCIS'07}, year = {2007}, number = {2}, pages = {50-55}, url = {http://jbthomas.org/Conferences/2007GCIS.pdf} } |
Thomas J-B, Chareyron G and Trémeau A (2007), "Image watermarking based on a color quantization process", Proc. SPIE. Vol. 6506, pp. 650603-650603-12. |
Abstract: The purpose of this paper is to propose a color image watermarking scheme based on an image dependent color gamut sampling of the L*a*b* color space. The main motivation of this work is to control the reproduction of color images on different output devices in order to have the same color feeling, coupling intrinsic informations on the image gamut and output device calibration. This paper is focused firstly on the research of an optimal LUT (Look Up Table) which both circumscribes the color gamut of the studied image and samples the color distribution of this image. This LUT is next embedded in the image as a secret message. The principle of the watermarking scheme is to modify the pixel value of the host image without causing any change neither in image appearance nor on the shape of the image gamut. |
BibTeX:
@inproceedings{2007EI, author = {Thomas, Jean-Baptiste and Chareyron, Gael and Trémeau, Alain}, title = {Image watermarking based on a color quantization process}, booktitle = {Proc. SPIE}, year = {2007}, volume = {6506}, pages = {650603-650603-12}, url = {http://jbthomas.org/Conferences/2007EI.pdf}, doi = {10.1117/12.702010} } |
Thomas J-B and Tremeau A (2007), "A Gamut Preserving Color Image Quantization", In Image Analysis and Processing Workshops, 2007. ICIAPW 2007. 14th International Conference on., Sept, 2007. , pp. 221-226. |
Abstract: We propose a new approach to color image quantization which preserves the shape of the color gamut of the studied image. Quantization consists in finding a set of colors representative of the color distribution of the image. We look here for an optimal LUT (look-up table) which contains information on the image's gamut and on its color distribution. The main motivation of this work is to control the reproduction of color images on different output devices in order to obtain the same color feeling, coupling intrinsic information on the image gamut with output device calibration. We have developed a color quantization algorithm based on an image-dependent sampling of the CIELAB color space. This approach outperforms classical approaches. |
BibTeX:
@inproceedings{2007CCIW, author = {Thomas, Jean-Baptiste and Tremeau, Alain}, title = {A Gamut Preserving Color Image Quantization}, booktitle = {Image Analysis and Processing Workshops, 2007. ICIAPW 2007. 14th International Conference on}, year = {2007}, pages = {221-226}, url = {http://jbthomas.org/Conferences/2007CCIW.pdf}, doi = {10.1109/ICIAPW.2007.6} } |
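The gamut-preserving idea can be caricatured as ordinary clustering plus re-insertion of extreme colours. A toy sketch using per-axis extremes as a crude gamut proxy; the paper works with the actual gamut shape in CIELAB, and all names here are ours:

```python
import numpy as np

def gamut_preserving_lut(colors, k, iters=20, seed=0):
    """Toy LUT: k-means-style centres in colour space, with the extreme
    points of the distribution (per-axis minima/maxima, a crude gamut
    proxy) added back so the LUT spans the original gamut."""
    rng = np.random.default_rng(seed)
    centres = colors[rng.choice(len(colors), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(colors[:, None] - centres[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = colors[labels == j].mean(axis=0)
    extremes = np.vstack([colors[colors[:, i].argmin()] for i in range(3)] +
                         [colors[colors[:, i].argmax()] for i in range(3)])
    return np.unique(np.vstack([centres, extremes]), axis=0)
```

Plain clustering shrinks towards the gamut interior; re-inserting extreme colours is what keeps the quantised image's gamut shape.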
Gigilashvili D and Thomas J-B (2023), "Appearance Beyond Colour", In Fundamentals and Applications of Colour Engineering. , pp. 239-257. John Wiley & Sons, Ltd. |
Abstract: Colour alone is not sufficient to communicate and reproduce how objects and materials look. While colours are usually measured locally, for a spot, real-life objects are characterized by spatially and temporally varying colour. The regularities of spatio-temporal variation of colour provide information about specular reflection of light, transmission of light through subsurface, and spatial variation of surface reflectance and geometry. These types of spatio-temporal variation of colour are described by gloss, translucency, and texture, respectively – and along with colour constitute what is usually called basic appearance attributes. While perception and reproduction of colour are relatively well-understood, the research on perception of other basic appearance attributes is in its infancy. Little is known about the physiological mechanisms of gloss, translucency and texture perception, at both the retinal and cortical levels. No standard observer is defined and no robust appearance models exist for those attributes. This has a considerable implication for current colour technologies. While an immense amount of work has been done for accurate cross-media colour reproduction, the generation of the desired look of complex, real-life objects requires reproduction of gloss, translucency, and texture as well. For instance, for a 3D printed prosthesis of a human limb to look realistic, not only colour, but also gloss, translucency, and texture should match with those of real human skin. This chapter summarizes the knowledge status on the perception of gloss and translucency by the human visual system, identifies current challenges, and discusses the implications they have for future colour imaging and manufacturing technologies. Analysis of texture perception is beyond the scope of this chapter. |
BibTeX:
@inbook{ThomasBook2023, author = {Gigilashvili, Davit and Thomas, Jean-Baptiste}, title = {Appearance Beyond Colour}, booktitle = {Fundamentals and Applications of Colour Engineering}, publisher = {John Wiley & Sons, Ltd}, year = {2023}, pages = {239-257}, url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/9781119827214.ch14}, doi = {10.1002/9781119827214.ch14} } |
Thomas J-B (2018), "Multispectral imaging for computer vision", In Habilitation à diriger des recherches., September, 2018. Université de Bourgogne, Franche-Comté. |
BibTeX:
@phdthesis{ThomasHDR2018, author = {Thomas, Jean-Baptiste}, title = {Multispectral imaging for computer vision}, type = {Habilitation à diriger des recherches}, school = {Université de Bourgogne Franche-Comté}, year = {2018}, url = {http://jbthomas.org/Thesis/2018HDRThesisCompactVersion.pdf} } |
Nozick V and Thomas J-B (2013), "Camera Calibration: Geometric and Colorimetric Correction", In 3D Video. , pp. 91-112. John Wiley & Sons, Inc.. |
Abstract: This chapter analyzes camera calibration from a geometric and colorimetric perspective. The first part of the chapter introduces the mathematical model that describes a camera such as position, orientation, focal, as well as its applications such as 3D reconstructions. It analyzes different geometric processes for stereoscopic images such as corrections of radial distortion as well as image rectification. The second part of the chapter focuses on colorimetric models relating to digital acquisition systems. It demonstrates how to characterize a colorimetric camera and then analyze different elements involved in color correction for images taken from a system of cameras. |
BibTeX:
@inbook{ThomasBook2013c, author = {Nozick, Vincent and Thomas, Jean-Baptiste}, title = {Camera Calibration: Geometric and Colorimetric Correction}, booktitle = {3D Video}, publisher = {John Wiley & Sons, Inc.}, year = {2013}, pages = {91--112}, url = {http://dx.doi.org/10.1002/9781118761915.ch5}, doi = {10.1002/9781118761915.ch5} } |
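The geometric camera model referred to in the chapter abstract is the standard pinhole projection. A minimal sketch of that model, with radial distortion and rectification omitted:

```python
import numpy as np

def project(K, R, t, X):
    """Pinhole camera: world point X (3,) -> pixel coordinates, via
    x ~ K (R X + t), where K holds focal length and principal point,
    and (R, t) the camera orientation and position."""
    xc = K @ (R @ X + t)        # homogeneous image coordinates
    return xc[:2] / xc[2]       # perspective division
```

Calibration estimates `K`, `R`, `t` (and distortion terms) from known correspondences; rectification then warps stereo pairs so epipolar lines become horizontal.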
Nozick V and Thomas J-B (2013), "Calibration et Rectification", In Vidéo 3D., October, 2013. , pp. 105-124. Hermès. |
BibTeX:
@inbook{ThomasBook2013b, author = {Vincent Nozick and Jean-Baptiste Thomas}, editor = {Laurent Lucas, Celine Loscos and Yannick REMION}, title = {Calibration et Rectification}, booktitle = {Vidéo 3D}, publisher = {Hermès}, year = {2013}, pages = {105-124}, url = {http://hal-upec-upem.archives-ouvertes.fr/hal-00844453} } |
Thomas J-B, Hardeberg J and Trémeau A (2013), "Cross-Media Color Reproduction and Display Characterization", In Advanced Color Image Processing and Analysis. , pp. 81-118. Springer New York. |
BibTeX:
@incollection{ThomasBook2013a, author = {Thomas, Jean-Baptiste and Hardeberg, Jon Y. and Trémeau, Alain}, editor = {Fernandez-Maloigne, Christine}, title = {Cross-Media Color Reproduction and Display Characterization}, booktitle = {Advanced Color Image Processing and Analysis}, publisher = {Springer New York}, year = {2013}, pages = {81-118}, url = {http://dx.doi.org/10.1007/978-1-4419-6190-7_4}, doi = {10.1007/978-1-4419-6190-7_4} } |
Thomas J-B (2009), "Colorimetric characterization of displays and multi-display systems" PhD. |
BibTeX:
@phdthesis{Thomasthesis2009, author = {Thomas, Jean-Baptiste}, title = {Colorimetric characterization of displays and multi-display systems}, school = {Université de Bourgogne}, year = {2009}, url = {https://jbthomas.org/Thesis/thesis-2009.pdf} } |
Thomas J-B (2024), "Recent developments in spectral filter array based systems", In ICC Spectral Imaging Experts' Day. Norway, September, 2024. |
BibTeX:
@inproceedings{thomasTalkICC2024, author = {Thomas, Jean-Baptiste}, title = {Recent developments in spectral filter array based systems}, booktitle = {ICC Spectral Imaging Experts' Day}, year = {2024}, url = {https://jbthomas.org/TechReport/ICC-Expert-Day-2024.pdf} } |
Thomas J-B (2022), "Standardization of spectral imaging: What is the RGB of spectral images?", In Computational Colour Imaging Workshop, invited keynote. Online, June, 2022. |
BibTeX:
@inproceedings{thomasTalkCCIW2022, author = {Thomas, Jean-Baptiste}, title = {Standardization of spectral imaging: What is the RGB of spectral images?}, booktitle = {Computational Colour Imaging Workshop, invited keynote}, year = {2022}, url = {http://jbthomas.org/TechReport/CCIW-Keynote-20220610.pdf} } |
Thomas J-B (2020), "Introduction to Colour Imaging", In ITN APPEARS training event, invited talk. Gjøvik, Norway, Feb, 2020. |
BibTeX:
@inproceedings{thomasTalkITN2020, author = {Thomas, Jean-Baptiste}, title = {Introduction to Colour Imaging}, booktitle = {ITN APPEARS training event, invited talk}, year = {2020}, url = {http://jbthomas.org/TechReport/ITN-introColourImaging20200217.pdf} } |
Thomas J-B (2019), "Qualitative research on the appearance of the Plastique collection", In Forum Farge. Trondheim, Norway, May, 2019. |
BibTeX:
@inproceedings{thomasTalkFF2019, author = {Thomas, Jean-Baptiste}, title = {Qualitative research on the appearance of the Plastique collection}, booktitle = {Forum Farge}, year = {2019}, note = {Invited talk to Seminar om farger som materiale - Forum Farge i Trondheim.}, url = {http://jbthomas.org/TechReport/ForumFarge20190503.pdf} } |
Thomas J-B (2018), "From spectral imaging to material appearance", In Habilitation à diriger des recherches. Dijon, France, September, 2018. |
BibTeX:
@inproceedings{thomasTalkHDR2018, author = {Thomas, Jean-Baptiste}, title = {From spectral imaging to material appearance}, booktitle = {Habilitation à diriger des recherches}, year = {2018}, note = {Présentation pour l'obtention de l'Habilitation à diriger des recherches}, url = {http://jbthomas.org/TechReport/HDRPresentation2018.pdf} } |
Thomas J-B (2018), "On the communication of material appearance", In Réunion GDR-ISIS "Géométrie et représentation de la couleur", en partenariat avec la journée "Modèles corticaux de perception visuelle et applications à l'imagerie". Paris, France, November, 2018. |
BibTeX:
@inproceedings{thomasTalkGDR2018, author = {Thomas, Jean-Baptiste}, title = {On the communication of material appearance}, booktitle = {Réunion GDR-ISIS "Géométrie et représentation de la couleur", en partenariat avec la journée "Modèles corticaux de perception visuelle et applications à l'imagerie"}, year = {2018}, note = {Campus Jussieu, 21-22 Novembre 2018}, url = {http://jbthomas.org/TechReport/GDR-ISIS-2018.pdf} } |
Thomas J-B (2018), "Quantifying appearance", In Forum Farge. Bergen, Norway, March, 2018. |
BibTeX:
@inproceedings{thomasTalk2018, author = {Thomas, Jean-Baptiste}, title = {Quantifying appearance}, booktitle = {Forum Farge}, year = {2018}, note = {Invited talk to Seminar om farger og materialitet - Forum Farge i Bergen.}, url = {http://jbthomas.org/TechReport/ForumFarge20180321.pdf} } |
Thomas J-B (2018), "Spectral Filter Array Cameras", Dagstuhl Reports, HMM Imaging: Acquisition, Algorithms, and Applications (Dagstuhl Seminar 17411). Dagstuhl, Germany Vol. 7(10), pp. 30. Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik. |
BibTeX:
@inproceedings{thomasTalk2017b, author = {Thomas, Jean-Baptiste}, editor = {Gonzalo R. Arce and Richard Bamler and Jon Yngve Hardeberg and Andreas Kolb and Shida Beigpour}, title = {Spectral Filter Array Cameras}, booktitle = {Dagstuhl Reports, HMM Imaging: Acquisition, Algorithms, and Applications (Dagstuhl Seminar 17411)}, publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik}, year = {2018}, volume = {7}, number = {10}, pages = {30}, url = {http://drops.dagstuhl.de/opus/volltexte/2018/8661}, doi = {10.4230/DagRep.7.10.14} } |
Thomas J-B, Monno Y and Lapray P-J (2017), "Spectral Filter Arrays Technology", In Color and Imaging Conference, 25th Color and Imaging Conference, Society for Imaging Science and Technology. Lillehammer, Norway, September, 2017. |
BibTeX:
@inproceedings{thomasTalk2017a, author = {Thomas, Jean-Baptiste and Monno, Yusuke and Lapray, Pierre-Jean}, title = {Spectral Filter Arrays Technology}, booktitle = {Color and Imaging Conference, 25th Color and Imaging Conference, Society for Imaging Science and Technology}, year = {2017}, note = {Adapted from the T2C short course at Color and Imaging Conference, 25th Color and Imaging Conference, Society for Imaging Science and Technology, September 11-15, 2017, Lillehammer, Norway.}, url = {http://jbthomas.org/TechReport/CIC-shortcourseSFA-2017.pdf} } |
Thomas J-B (2016), "MultiSpectral Filter Arrays: Tutorial and prototype definition", November - December, 2016. |
BibTeX:
@misc{ThomasTalk2016d, author = {Thomas, Jean-Baptiste}, title = {MultiSpectral Filter Arrays: Tutorial and prototype definition}, note = {EPFL, Lausanne, NTNU-Gjovik}, year = {2016} } |
Thomas J-B (2014), "MultiSpectral Filter Arrays: Design and demosaicing", November - December, 2014. |
BibTeX:
@misc{ThomasTalk2014c, author = {Thomas, Jean-Baptiste}, title = {MultiSpectral Filter Arrays: Design and demosaicing}, note = {Guest lecture, LPNC, Grenoble and LISTIC, Annecy}, year = {2014} } |
Thomas J-B (2014), "Filter array-based spectral imaging: Design choices and practical realization", September, 2014. |
BibTeX:
@misc{ThomasTalk2014b, author = {Thomas, Jean-Baptiste}, title = {Filter array-based spectral imaging: Design choices and practical realization}, note = {Workshop of the hypercept project #5, Multispectral image capture, processing, and quality}, year = {2014} } |
Thomas J-B (2014), "Sensors based on MultiSpectral Filter Arrays", March, 2014. |
BibTeX:
@misc{ThomasTalk2014a, author = {Thomas, Jean-Baptiste}, title = {Sensors based on MultiSpectral Filter Arrays}, note = {Invited Talk, ORA}, year = {2014}, url = {http://www.pole-ora.com/pages/projets/OPage_JTIMAGEPROC2014.php} } |
Thomas J-B (2012), "Calibration de caméras couleurs. Rapport technique et références". |
BibTeX:
@misc{ThomasTechReport2012, author = {Thomas, Jean-Baptiste}, title = {Calibration de caméras couleurs. Rapport technique et références}, year = {2012}, url = {http://jbthomas.org/TechReport/cameraCalibration2012.pdf} } |
Thomas J-B, Hardeberg J and Trémeau A (2012), "Draft Report on Cross-Media Color Reproduction and Display Characterization". |
BibTeX:
@misc{ThomasTechReport2010, author = {Thomas, Jean-Baptiste and Hardeberg, Jon Y. and Trémeau, Alain}, title = {Draft Report on Cross-Media Color Reproduction and Display Characterization}, year = {2012}, url = {http://jbthomas.org/TechReport/displaysCrossMedia2010.pdf} } |
Thomas J-B (2009), "Colorimetric characterization of displays and multi-display systems", November, 2009. |
BibTeX:
@misc{ThomasTalk2009, author = {Thomas, Jean-Baptiste}, title = {Colorimetric characterization of displays and multi-display systems}, note = {Invited Talk, VISOR Seminar}, year = {2009}, url = {http://jbthomas.org/TechReport/seminarVISOR2009.pdf} } |
Nguyen M (2024), "Image-based Estimation of Physical Correlates of the Visual Appearance of Snow" PhD. |
Abstract: The most common representation of snow is to describe it as a white and cold powder material that is usually found in winter or in specific areas of the world. It can also be associated with the Christmas festivities, winter sports and leisure activities such as alpine or cross-country skiing, and more recently with global warming issues or the search for water on other planets. To a human being, the appearance of snow is conveyed by several visible interactions at the surface of snow, but it is also influenced by phenomena occurring under its surface and connected to its microstructure.
Capturing visual appearance features of snow is a challenging task as the natural and unstable characteristics of the material often require operating at the limits of the sensor capacities. This thesis aims to utilize image-based methods to acquire various snow correlatives related to visual appearance, gather them, and analyse to try to find links with a potential classification of snow. A first part is dedicated to the investigation of a reflectance model of snow by using hyperspectral cameras to find back values of the snow grain size and the snow grain shape. These estimates can be used to establish a classification of the type of snow. We performed acquisitions in a laboratory with snow samples, and we monitored the evolution of melting snow by obtaining hyperspectral images. From these images, we derived an effective parameter to qualify the contribution of both snow grain size and snow grain shape, although we could not obtain a precise and distinct measurements of these two parameters due to a lack of ground truth data. Secondly, the sparkle of snow is measured from digital images acquired insitu over two winters. Datasets of snow images were established by performing outdoor acquisitions with a DSLR camera. A state-of-the-art algorithm originally designed to measure sparkle was adapted to the case of snow. With a statistical analysis of the results, an attempt at finding a connection between sparkle and categories of snow is made. A classification seems possible, but further investigations with an expert and precise labelling should be operated to confirm this theory. Finally, an inversion method is designed to obtain estimates of absorption and scattering properties of highly diffuse materials with a single reflectance measurement. After being tested and validated on dairy products, an in-situ campaign was operated during a winter by taking the measuring device outside. 
The results of this study confirm the absorption and scattering properties of snow, while opening new perspectives for the virtual rendering of this material. The work contributes to various research areas, all connected through imaging methods. In addition, we have started to link our results from different correlates with intrinsic parameters of snow, such as grain size and grain shape, paving the way for further research in this field. |
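The abstract does not specify which model underlies the single-measurement inversion of absorption and scattering; a classical candidate for such a relation, included here purely as an illustrative assumption, is the Kubelka-Munk formula, which maps one diffuse reflectance value to an absorption-to-scattering ratio:

```python
def kubelka_munk_ratio(r):
    """Absorption-to-scattering ratio K/S from a single diffuse reflectance r.

    Kubelka-Munk relation: K/S = (1 - r)^2 / (2 r), valid for 0 < r <= 1.
    Higher reflectance implies a lower absorption-to-scattering ratio,
    consistent with fresh snow being highly reflective in the visible range.
    """
    if not 0.0 < r <= 1.0:
        raise ValueError("reflectance must be in (0, 1]")
    return (1.0 - r) ** 2 / (2.0 * r)

# Fresh snow (r near 1) yields a K/S close to zero.
print(kubelka_munk_ratio(0.97))
```

This is a sketch of one standard single-measurement relation, not the inversion method developed in the thesis.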
BibTeX:
@phdthesis{Nguyenthesis2024, author = {Nguyen, Mathieu}, school = {NTNU}, title = {Image-based Estimation of Physical Correlates of the Visual Appearance of Snow}, year = {2024}, url = {http://jbthomas.org/SupervisedPhD/2024MathieuNguyen.pdf} } |
Grillini F (2023), "Reflectance imaging spectroscopy: Fusion of VNIR and SWIR for Cultural Heritage analysis" PhD. |
Abstract: Reflectance Imaging Spectroscopy, often referred to as hyperspectral imaging, is an imaging technique that enables the simultaneous capture of spatial and spectral information from a scene without physical contact and in a non-invasive manner. These desirable features make it especially well-suited for applications in Cultural Heritage analysis, where the investigation of historical artifacts should avoid causing irreversible damage.
This thesis revisits the imaging pipeline from data acquisition to the processing steps that fuse two independent hyperspectral images captured in separate spectral ranges. The need to address this topic comes from the fact that Visible Near-Infrared (VNIR) and Short-Wave Infrared (SWIR) imaging spectroscopy are consistently deployed in the field of Cultural Heritage for a series of research tasks, including but not limited to analyzing the basic components of historical artifacts (pigments, dyes, binding media, mordants, fibers, etc.), long-term artifact monitoring, assessment during conservation treatments, component mapping, and the revealing of hidden patterns not discernible to the human eye. However, VNIR and SWIR hyperspectral images of the same scene are often analyzed independently because of the intrinsic differences at the image sensor level, which make data fusion a challenging problem. The first goal of this thesis is to develop an appropriate imaging setup for the simultaneous acquisition of VNIR-SWIR hyperspectral data, with the twofold aim of obtaining high-quality data while preserving the integrity of the studied artifact. Secondly, the spatio-spectral alignment of the two hyperspectral images is addressed. Since the problem of spatial image registration has been extensively studied in the literature, we focus on the factors that may influence its performance in this context. For the spectral alignment, we propose a novel splicing correction that smoothly connects hyperspectral images with adjacent or overlapping spectral ranges. We then explore the application of image sharpening (e.g. pansharpening) techniques, originally developed for remote sensing, to proximally sensed historical artifacts, proposing a discussion focused on the negative impact that some algorithms have on subsequent analysis processes such as the classification of spectral signals.
Finally, starting from the hypothesis of having to capture complex artifacts such as glossy paintings, we address the integration of polarimetric imaging into the fusion pipeline, developing a paradigm for the acquisition of VNIR-SWIR spectral Stokes images that allows the study of spectro-polarimetric quantities such as the correlation between reflectance and the degree of linear polarization. Under the initial hypothesis, the joint analysis of VNIR and SWIR Reflectance Imaging Spectroscopy data can be thought of as more powerful than the two analyses conducted separately. However, this hypothesis could not be fully verified within this thesis, and some open questions regarding its validity are left for future exploration. |
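The thesis's novel splicing correction is not reproduced here; as a point of reference, the simplest way to connect two spectra with overlapping ranges is a linear cross-fade over the shared wavelength samples, sketched below under the assumption that the overlap is already spatially and spectrally aligned:

```python
import numpy as np

def splice_linear_blend(vnir, swir, overlap):
    """Cross-fade two 1-D spectra over their shared wavelength samples.

    vnir, swir: reflectance spectra whose last/first `overlap` samples
                cover the same wavelengths (alignment is assumed done).
    Returns one continuous spectrum: VNIR-only part, a linear cross-fade
    in the overlap, then the SWIR-only part.
    """
    w = np.linspace(0.0, 1.0, overlap)  # blend weight ramps from VNIR to SWIR
    blended = (1.0 - w) * vnir[-overlap:] + w * swir[:overlap]
    return np.concatenate([vnir[:-overlap], blended, swir[overlap:]])
```

A naive cross-fade like this removes the visible discontinuity at the splice point but, unlike the correction proposed in the thesis, does not model the sensor-level causes of the mismatch.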
BibTeX:
@phdthesis{Grillinithesis2023, author = {Grillini, Federico}, school = {NTNU}, title = {Reflectance imaging spectroscopy: Fusion of VNIR and SWIR for Cultural Heritage analysis}, year = {2023}, url = {http://jbthomas.org/SupervisedPhD/2023FedericoGrillini.pdf} } |
Russo S (2022), "Analysis and assessment of degradation of polychrome metal artworks" PhD. |
Abstract: Manuscript to appear |
BibTeX:
@phdthesis{Russothesis2022, author = {Russo, Silvia}, school = {University of Neuchatel and Haute Ecole Arc CR}, title = {Analysis and assessment of degradation of polychrome metal artworks}, year = {2022} } |
Gigilashvili D (2021), "On the Appearance of Translucent Objects: Perception and Assessment by Human Observers" PhD. |
Abstract: Appearance characterizes the visual features of objects and materials. It is a complex psychovisual phenomenon that is usually broken down into several appearance attributes to simplify its measurement and communication and to study its nature. Color, texture, gloss, and translucency are considered the major appearance attributes. Significant research work has been done in metrology for the accurate instrumental measurement of the optical properties of materials, and considerable advances have been made in computer graphics, permitting the generation of highly photorealistic visual stimuli. Nevertheless, knowledge remains limited on how humans perceive appearance, how we behave when assessing appearance, what factors impact our perception, how different attributes interact with each other, and, overall, how optical properties relate to their perceptual counterparts.
In this thesis, we explore various aspects of appearance perception, with a focus on the appearance of translucent objects. For this purpose, we conducted a series of social and psychophysical experiments with real and synthetic visual stimuli. Elucidating the appearance perception of translucent objects has implications for industrial, academic, and artistic applications alike. In the initial stage of the study, we organized a social experiment to collect qualitative observations on the process of appearance assessment, construct a qualitative model of material appearance, and generate relevant research hypotheses. These hypotheses were analyzed in the context of the state of the art. Afterwards, we tested the most interesting hypotheses quantitatively, in order to assess their prospects for generalization. The experimental results provided indications in support of the hypotheses. We observed that the translucency of an object impacts the perception of glossiness, while the detection of a translucency difference depends on the geometric thickness of the objects and the optical thickness of the materials they are made of. Additionally, we examined the potential role in translucency perception of several cues present in the image formed by either a camera or a human observer. We found that the blurriness of the image and the presence of caustics can impact apparent translucency. Finally, we conducted a comprehensive survey on translucency perception, advancing the state of the art with our findings and outlining unanswered questions for future research. |
BibTeX:
@phdthesis{Gigilashvilithesis2021, author = {Gigilashvili, Davit}, school = {NTNU}, title = {On the Appearance of Translucent Objects: Perception and Assessment by Human Observers}, year = {2021}, url = {http://jbthomas.org/SupervisedPhD/2021DavitGigilashvili.pdf} } |
Khan HA (2018), "Multispectral constancy for illuminant invariant representation of multispectral images" PhD. |
Abstract: A conventional color imaging system provides high-resolution spatial information and low-resolution spectral data. In contrast, a multispectral imaging system is able to provide both the spectral and the spatial information of a scene in high resolution. However, multispectral imaging systems are complex, and it is not easy to use them as hand-held devices for data acquisition in uncontrolled conditions. The use of multispectral imaging for computer vision applications has started only recently and remains inefficient due to these limitations. Therefore, most computer vision systems still rely on traditional color imaging, and the potential of multispectral imaging for these applications has yet to be explored.
With advances in sensor technology, hand-held multispectral imaging systems are coming to market; one example is the snapshot multispectral filter array camera. So far, data acquisition with multispectral imaging systems requires specific imaging conditions, and their use is limited to a few applications, including remote sensing and indoor systems. Knowledge of the scene illumination during multispectral image acquisition is one of these important conditions. In color imaging, computational color constancy deals with this condition, while the lack of such a framework for multispectral imaging is one of the major limitations preventing the use of multispectral cameras in uncontrolled imaging environments. In this work, we extend some methods of computational color imaging and apply them to multispectral imaging systems. A major advantage of color imaging is its ability to provide consistent colors of objects and surfaces across varying imaging conditions. We extend the concept of color constancy and white balancing from color to multispectral images, and we introduce the term multispectral constancy. The validity of the proposed framework for the consistent representation of multispectral images is demonstrated through the spectral reconstruction of material surfaces from the acquired images. We also present a new hyperspectral reflectance image dataset. The framework of multispectral constancy brings multispectral imaging one step closer to use in computer vision applications, where the spectral as well as the spatial information of a surface can provide distinctive features for material identification and classification tasks. |
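The extension of white balancing from three color channels to K spectral bands can be illustrated with a diagonal (von Kries-style) correction, in which each band is divided by the camera's response to the scene illuminant in that band. This sketch is an assumption about the general mechanism, not the thesis's specific framework, and the band count and values are illustrative:

```python
def multispectral_constancy(pixel, illuminant):
    """Diagonal illuminant correction generalized from 3 RGB channels to K bands.

    pixel:      observed sensor responses, one value per band
    illuminant: camera responses to the scene illuminant, same band count
    Each band is divided by the illuminant response, yielding a
    representation that is stable across illumination changes.
    """
    if len(pixel) != len(illuminant):
        raise ValueError("pixel and illuminant must have the same band count")
    return [p / max(e, 1e-12) for p, e in zip(pixel, illuminant)]

# A 6-band example: under a flat (equal-energy) illuminant the pixel
# is returned unchanged.
print(multispectral_constancy([0.2, 0.4, 0.1, 0.6, 0.3, 0.5],
                              [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]))
```

As in trichromatic white balancing, the correction is purely per-band; estimating the illuminant itself is the harder problem the multispectral constancy framework addresses.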
BibTeX:
@phdthesis{Khanthesis2018, author = {Khan, Haris Ahmad}, school = {Université de Bourgogne and NTNU}, title = {Multispectral constancy for illuminant invariant representation of multispectral images}, year = {2018}, url = {http://jbthomas.org/SupervisedPhD/2018HarisAhmadKhan.pdf} } |
Wang X (2016), "Filter array based spectral imaging : Demosaicking and design considerations" PhD, Doctoral thesis at NTNU;2016:251. |
Abstract: Spectral imaging apparatus in current use are often cumbersome, costly and slow in operation, which is a major obstacle to the extensive use of spectral imaging in several application areas. In recent years, the technical and commercial success of color filter array (CFA) based imaging systems has motivated researchers to generalise and expand the concept of the CFA to achieve efficient spectral imaging through the use of the spectral filter array (SFA). This dissertation expounds the research into the filter array approach to spectral imaging based on a simulation framework, from the development of demosaicking methods to design and evaluation at the system level. The dissertation first presents the development of the field of spectral imaging from its roots in spectroscopy and imaging, and explores the state-of-the-art SFA-based solutions from design to realisation. It then proposes a simulation framework composed of the major parts of a typical imaging pipeline. On this basis, the influence of chromatic aberration on CFA demosaicking and the impact of filter bandwidth on spectral reconstruction were evaluated. The results helped to better understand the delicate interactions between the components of the pipeline and to verify the validity of the simulation framework. Using the framework, three novel SFA demosaicking methods were developed and evaluated. The methods differ fundamentally and thus feature distinct properties, as confirmed by the experimental results. The key to understanding the differences lies in the way demosaicking methods deal with the spatial and spectral correlation between pixels in a mosaicked image. An evaluation of the colorimetric performance shows that a properly designed SFA-based imaging system may also be useful for colour image acquisition. Lastly, the performance of the proposed and conventional demosaicking methods was scrutinised given the characteristics and parameters of a real-world SFA sensor design.
We conclude that, for a successful SFA-based spectral imaging system design, it is important to consider carefully the joint influence of all the modules involved, as well as the requirements and constraints of the applications. In light of technological advances and market demand, we expect the use of SFA-based spectral imaging to widen in the foreseeable future. |
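To make the demosaicking problem concrete (the dissertation's three proposed methods are not reproduced here), the crudest baseline for a 2x2 SFA is a zero-order hold: each band's sparse sample is replicated over its superpixel. The pattern layout below is an illustrative assumption:

```python
import numpy as np

def sfa_demosaic_nearest(mosaic, pattern):
    """Naive zero-order-hold demosaicking for a 2x2 spectral filter array.

    mosaic:  (H, W) raw sensor image, H and W even.
    pattern: 2x2 nested list of band indices, e.g. [[0, 1], [2, 3]].
    Returns: (H, W, K) cube where each band plane is filled by replicating
             that band's sample over its 2x2 superpixel.
    """
    h, w = mosaic.shape
    k = int(np.max(pattern)) + 1
    cube = np.zeros((h, w, k))
    for di in range(2):
        for dj in range(2):
            band = pattern[di][dj]
            samples = mosaic[di::2, dj::2]  # this band's sparse samples
            # Replicate each sample over its 2x2 superpixel.
            cube[:, :, band] = np.repeat(np.repeat(samples, 2, axis=0), 2, axis=1)
    return cube
```

A baseline like this ignores the spatial and spectral correlation between neighbouring pixels, which is exactly the information the dissertation's demosaicking methods exploit.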
BibTeX:
@phdthesis{Wangthesis2016, author = {Wang, Xingbo}, school = {NTNU and Université de Bourgogne}, title = {Filter array based spectral imaging : Demosaicking and design considerations}, note = {Doctoral thesis at NTNU;2016:251}, year = {2016}, url = {http://jbthomas.org/SupervisedPhD/2016XingboWangThesis.pdf} } |
ElKhoury J (2016), "Model and quality assessment of single image dehazing" PhD. |
Abstract: This thesis is mainly related to color imaging science, involving many disciplines such as color image enhancement, image formation, color reproduction, optical physics, radiometry, colorimetry, image quality, and psychophysics. Dehazing aims at recovering the image information degraded by light scattering, e.g. in bad weather. This is an ill-posed and challenging problem. Although a variety of approaches have been proposed, there is still room for further improvement and standardization. In this work, we investigate the limitations of the haze model in terms of the accuracy of color image recovery. We also address the link between visibility deterioration and the spectral content of the images. Moreover, given the multiple existing dehazing algorithms, it is necessary to evaluate and compare their performance. Indeed, only limited investigations have been performed on the quality of dehazing, in particular on the fidelity of the recovered material. Thus, we propose to evaluate the quality of dehazed images. To this aim, a color and a multispectral hazy image database have been built. Together with their ground-truth clear images, these databases provide an adequate tool for studying dehazing quality in terms of objective and subjective assessment. |
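The haze model investigated above is, in most single-image dehazing work, the formation model I(x) = J(x) t(x) + A (1 - t(x)), where J is the scene radiance, t the transmission, and A the atmospheric light. A minimal inversion sketch, assuming t and A have already been estimated (which is the actual ill-posed part of the problem):

```python
import numpy as np

def dehaze(hazy, transmission, airlight, t_min=0.1):
    """Invert the standard haze formation model I = J*t + A*(1 - t).

    hazy:         observed intensity I (any array shape)
    transmission: per-pixel transmission t in (0, 1]
    airlight:     global atmospheric light A
    t_min:        floor on t, to avoid amplifying noise where haze is dense
    """
    t = np.maximum(transmission, t_min)
    return (hazy - airlight) / t + airlight

# A scene radiance J = 0.2 seen through t = 0.5 under A = 0.9 is observed
# as I = 0.2*0.5 + 0.9*0.5 = 0.55; inverting recovers approximately 0.2.
print(dehaze(np.array([0.55]), np.array([0.5]), 0.9))
```

The thesis's point is precisely that this model, and the estimation of t and A it requires, limits the color fidelity of the recovered image.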
BibTeX:
@phdthesis{ElKhourythesis2016, author = {ElKhoury, Jessica}, school = {Université de Bourgogne}, title = {Model and quality assessment of single image dehazing}, year = {2016}, url = {http://jbthomas.org/SupervisedPhD/2016JessicaElKhouryThesis.pdf} } |
Zhao P (2015), "Colorimetric characterization of displays and multi-display systems" PhD, Doctoral Dissertations at Gjovik University College;3-2015. |
Abstract: This thesis presents the outcomes of research carried out by the PhD candidate Ping Zhao from 2012 to 2015 at Gjøvik University College. The research was part of the HyPerCept project, in the program of Strategic Projects for University Colleges, funded by The Research Council of Norway. It was conducted under the supervision of Professor Jon Yngve Hardeberg and the co-supervision of Associate Professor Marius Pedersen, from The Norwegian Colour and Visual Computing Laboratory, in the Faculty of Computer Science and Media Technology of Gjøvik University College, as well as the co-supervision of Associate Professor Jean-Baptiste Thomas, from the Laboratoire Electronique, Informatique et Image, in the Faculty of Computer Science of Université de Bourgogne. The main goal of this research was to develop a fast and inexpensive camera-based display image quality assessment framework. Due to the limited time frame, we decided to focus only on projection displays with static images displayed on them. However, the proposed methods are not limited to projection displays and are expected to work with other types of displays, such as desktop monitors, laptop screens, and smartphone screens, with limited modifications. The primary contributions of this research can be summarized as follows:
1. We proposed a camera-based display image quality assessment framework, originally designed for projection displays but usable for other types of displays with limited modifications.
2. We proposed a method to calibrate the camera in order to eliminate the unwanted vignetting artifact, which is mainly introduced by the camera lens.
3. We proposed a method to optimize the camera's exposure with respect to the measured luminance of the incident light, so that after calibration all camera sensors share a common linear response region.
4. We proposed a marker-less and view-independent method to register a captured image with its original at a sub-pixel level, so that existing full-reference image quality metrics can be incorporated without modification.
5. We identified spatial uniformity, contrast and sharpness as the most important image quality attributes for projection displays, and we used the proposed framework to evaluate the prediction performance of state-of-the-art image quality metrics regarding these attributes.
The proposed image quality assessment framework is the core contribution of this research. Compared to conventional image quality assessment approaches, which are largely based on colorimeter or spectroradiometer measurements, using a camera as the acquisition device has the advantages of quickly recording all displayed pixels in one shot and of requiring a relatively inexpensive instrument. Therefore, the time and resources consumed by image quality assessment can be largely reduced. We proposed a method to calibrate the camera in order to eliminate the unwanted vignetting artifact primarily introduced by the camera lens. We used a hazy sky as a closely uniform light source, and the vignetting mask was generated from the median sensor responses over only a few rotated shots of the same spot on the sky.
We also proposed a method to quickly determine whether all camera sensors share a common linear response region. In order to incorporate existing full-reference image quality metrics without modifying them, an accurate registration of pairs of pixels between a captured image and its original is required. We proposed a marker-less and view-independent image registration method to solve this problem. The experimental results showed that the proposed method works well in viewing conditions with low ambient light. We further identified spatial uniformity, contrast and sharpness as the most important image quality attributes for projection displays. Subsequently, we used the developed framework to objectively evaluate the prediction performance of state-of-the-art image quality metrics regarding these attributes in a robust manner. In this process, the metrics were benchmarked with respect to the correlations between the prediction results and the perceptual ratings collected from subjective experiments. The analysis of the experimental results indicated that our proposed methods were effective and efficient. Subjective experiments are an essential component of image quality assessment; however, they can be time- and resource-consuming, especially when additional image distortion levels are required to extend existing subjective experimental results. For this reason, we investigated the possibility of extending subjective experiments with the baseline adjustment method, and we found that the method can work well if appropriate strategies are applied. These strategies concern which distortion levels to include in the baseline, as well as their number. |
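The vignetting calibration described above, a median over rotated shots of a near-uniform sky, can be sketched as follows; the function names, the normalization to the brightest point, and the division-based correction are assumptions for illustration, not the thesis's code:

```python
import numpy as np

def vignetting_mask(flat_shots):
    """Estimate a vignetting mask from several shots of a near-uniform source.

    flat_shots: iterable of (H, W) images of the same (hazy-sky) spot,
                rotated between shots so residual scene texture averages out.
    The per-pixel median rejects outliers; the mask is normalized to 1.0
    at its brightest point.
    """
    mask = np.median(np.stack(list(flat_shots)), axis=0)
    return mask / mask.max()

def correct_vignetting(image, mask):
    """Divide out the lens falloff captured by the mask."""
    return image / np.maximum(mask, 1e-6)
```

With the falloff divided out, pixel values across the captured display image become directly comparable, which the registration and metric-evaluation steps rely on.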
BibTeX:
@phdthesis{Zhaothesis2015, author = {Zhao, Ping}, school = {NTNU}, title = {Colorimetric characterization of displays and multi-display systems}, note = {Doctoral Dissertations at Gjovik University College;3-2015}, year = {2015}, url = {http://jbthomas.org/SupervisedPhD/2015PingZhaoThesis.pdf} } |