SPARCS is a cost-effective and easily accessible laptop-based online application for assessing both central and peripheral CS, allowing individuals to actively participate in monitoring their eye health at home using only a laptop and an internet connection.6
Another such test is ClinicCSF, in which the Contrast Sensitivity Function (CSF) is measured on an iPad. A recent study compared ClinicCSF against the Functional Acuity Contrast Test (FACT) and found no significant differences between the two, suggesting that applications on iPads and smartphones can provide accurate measurements of CS, comparable to established psychophysical tests such as FACT.8
In another study, a tablet-based platform proved to be a valid method for assessing distance and near visual acuity as well as CS, yielding outcomes comparable to the gold-standard clinical tests: ETDRS distance acuity, Pelli-Robson CS, and MNRead near acuity.9
As technology continues to evolve, it is imperative to validate visual function assessment applications on different devices to detect any impact of new hardware. By prioritizing validated apps, healthcare professionals can ensure reliable and comparable results when using these applications in clinical settings. The validation process should keep pace with technological change to maintain the accuracy and effectiveness of visual function assessment.
Our study assessed the performance of the SPARCS test on two different laptops, the MacBook Air and the Microsoft Surface Pro 7. The aim was to identify any significant variation in test results when the test was conducted on laptops with different display characteristics. These laptops were chosen to compare the application on the two most widely used desktop operating systems (OS), macOS and Windows; as of May 2024, the worldwide desktop OS market share was 73.9% for Windows and 14.91% for macOS.10
We found good repeatability and reliability of SPARCS scores when the test was performed on the same laptop. The Bland-Altman (BA) plot showed that the mean difference between measurements taken on the two devices was modest; however, the wide limits of agreement indicate that the difference could be substantial for some measurements. This variability is supported by the correlation matrix, which showed that while equivalent measurements on the two devices were positively related, the correlation was not perfect. This deviation highlights the potential variability in measurements and suggests that the devices do not consistently yield identical results.
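For readers less familiar with the Bland-Altman method, the short sketch below shows how the bias and 95% limits of agreement (mean difference ± 1.96 SD of the differences) are computed from paired scores. The arrays are hypothetical and purely illustrative; they do not reproduce our study data.

```python
import numpy as np

# Hypothetical paired SPARCS scores from the same subjects on two laptops.
macbook = np.array([78, 82, 75, 90, 88, 70, 85, 80])
surface = np.array([74, 80, 77, 85, 84, 66, 82, 75])

diff = macbook - surface
bias = diff.mean()              # mean difference (bias)
sd = diff.std(ddof=1)           # sample SD of the differences
loa_lower = bias - 1.96 * sd    # lower 95% limit of agreement
loa_upper = bias + 1.96 * sd    # upper 95% limit of agreement

print(f"bias = {bias:.2f}, 95% LoA = [{loa_lower:.2f}, {loa_upper:.2f}]")
```

A modest bias with wide limits of agreement, as seen in our data, means the two devices agree on average but can disagree appreciably for individual measurements.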
In the clinical context of CS testing using SPARCS, even minor variations in measurements can carry significant implications. This is because CS testing is highly sensitive and designed to detect subtle changes in a patient’s visual capabilities. As such, any external sources of variability, including those introduced by the choice of device or screen technology, can confound the results.
Given these findings, we stress the importance of maintaining device consistency during SPARCS testing in clinical practice. Switching between the MacBook and the Surface Pro (or any two different laptops), while seemingly minor, could introduce discrepancies that do not reflect genuine changes in an individual's CS but instead arise from the inherent variability between the two devices.
Display properties that could have contributed to the observed differences include pixel density, screen resolution, luminance, color calibration, refresh rate, and aspect ratio.
The increased level of detail afforded by higher pixel density (pixels per inch; PPI) can aid CS testing by allowing better differentiation of fine contrast differences. LCD screens typically use subpixels (red, green, and blue) to create individual pixels, so the subpixel arrangement can also affect rendered contrast.
The MacBook Air's 13.3-inch Retina display has a 2560 x 1600 resolution (approximately 227 PPI), and its high pixel density and color accuracy yield good clarity and visual performance during CS testing. The Microsoft Surface Pro 7's 12.3-inch PixelSense display has a 2736 x 1824 resolution and an even higher pixel density (approximately 267 PPI), with strong color reproduction. Despite both screens having comparably high pixel density and resolution, the two laptops produced different outcomes: the MacBook Air's Retina display yielded higher SPARCS scores than the Microsoft Surface Pro.
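The PPI figures quoted above follow directly from each panel's resolution and diagonal size, PPI = √(width² + height²) / diagonal. The short calculation below, written in Python purely for illustration, reproduces them.

```python
import math

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixels per inch from pixel resolution and diagonal screen size."""
    return math.hypot(width_px, height_px) / diagonal_in

# Published panel specifications of the two laptops used in this study.
print(f'MacBook Air 13.3":   {ppi(2560, 1600, 13.3):.0f} PPI')  # ~227
print(f'Surface Pro 7 12.3": {ppi(2736, 1824, 12.3):.0f} PPI')  # ~267
```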
Displays with higher resolutions, such as QHD (2560 x 1440) or 4K UHD (3840 x 2160), typically provide sharper and more detailed images. This increased level of detail can enhance contrast discrimination by making subtle differences easier to discern.

In one study, 15 iPad mini Retina display devices were evaluated for visual function assessment. The tablets required approximately 13 minutes to reach stable luminance after being powered on, while chromaticity remained constant throughout. Temperature fluctuations had a minimal (1%) impact on luminance and no effect on chromaticity. All 15 tablets exhibited gamma functions closely approximating the standard gamma value of 2.20, and their color gamut sizes were similar, with only slight differences in the blue primary. Given these comparable physical characteristics, the devices were deemed suitable for use as visual stimulus displays.11 For CS testing, these findings suggest that luminance variations across the screen, whether due to viewing angle, battery level, or temperature, are likely to have some impact; however, the study did not quantify how such variations would affect CS, and further studies are needed to determine their exact influence. Moreover, because the study was performed on tablets, the direct implications for a given laptop may vary with its specific characteristics and display technology. It is therefore crucial to calibrate and evaluate the display performance of individual devices, especially if they are not of the same make and model.
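The gamma value cited from the tablet study can be made concrete with the standard display gamma model, in which luminance is a power function of the normalized digital drive level. The sketch below is illustrative only: the peak luminance is an assumed value, and neither number describes measured behavior of the laptops in our study.

```python
L_MAX = 300.0   # assumed peak white luminance in cd/m^2 (illustrative)
GAMMA = 2.20    # standard gamma value reported for the tablets in ref. 11

def luminance(level: int, gamma: float = GAMMA, l_max: float = L_MAX) -> float:
    """Approximate luminance (cd/m^2) of an 8-bit gray level: L = L_max * (v/255)**gamma."""
    return l_max * (level / 255) ** gamma

# A small gamma deviation shifts mid-gray luminance noticeably:
print(f"{luminance(128):.1f}")             # ~65.8 cd/m^2 at gamma 2.20
print(f"{luminance(128, gamma=2.40):.1f}") # ~57.4 cd/m^2 at gamma 2.40
```

Two displays driven with identical pixel values but slightly different gamma functions therefore present physically different gray levels, which is one plausible route by which device choice could shift CS scores.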
Color calibration refers to the process of adjusting a display's color reproduction for accurate and consistent color representation. If one device renders certain shades of grey darker or lighter than intended, it can affect the perceived contrast between elements of the test patterns. The MacBook Air (M1, 2020) is equipped with True Tone technology, which automatically adjusts the display's color temperature according to ambient lighting, and features a wide color gamut capable of displaying a broader spectrum of colors. The Microsoft Surface Pro 7, on the other hand, is designed with a high contrast ratio to enhance the differentiation between dark and light areas on the screen, and incorporates an ambient light sensor that automatically adapts display brightness to the surrounding lighting.
These optimizations may affect how contrast is rendered and ultimately the SPARCS score obtained during testing. To ensure reliable and consistent CS testing, both devices should be calibrated with a color calibration device, such as a colorimeter or spectrophotometer, designed to measure and adjust color accuracy. This minimizes discrepancies in color representation and allows a more reliable assessment of CS across different devices.
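To make the stakes concrete, consider the Michelson contrast of a simple stimulus, C = (Lmax − Lmin) / (Lmax + Lmin). The sketch below uses hypothetical luminance values to show how a small, uncorrected black-level offset on one display changes the physical contrast actually presented to the patient, even when the software draws identical pixel values.

```python
def michelson(l_max: float, l_min: float) -> float:
    """Michelson contrast of a stimulus from its peak and trough luminance."""
    return (l_max - l_min) / (l_max + l_min)

# Nominal 5% contrast stimulus around a 100 cd/m^2 background
# on a well-calibrated display:
print(f"{michelson(105.0, 95.0):.4f}")   # 0.0500

# The same drive levels on a display with a +10 cd/m^2 black-level
# offset render a weaker physical contrast:
print(f"{michelson(115.0, 105.0):.4f}")  # ~0.0455, i.e. ~9% weaker
```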
A higher refresh rate (60 Hz or above) can minimize screen flicker, ensuring stable and smooth test stimuli. Both the MacBook Air and the Microsoft Surface Pro 7 have a standard 60 Hz refresh rate, meaning the display refreshes the image 60 times per second and provides a smooth visual experience during CS testing. One study found that higher video display refresh rates improved reading speed and reduced disruptions in eye movements during reading; it also highlighted the importance of oculomotor adaptation, with participants adjusting their eye movements to the characteristics of the display, leading to improved reading outcomes.12
The MacBook Air features a 16:10 aspect ratio (close to the golden ratio, or divine proportion, with width 1.6 times the height), while the Microsoft Surface Pro features a 3:2 aspect ratio, a similarly tall format with a slightly different proportion. While aspect ratio itself does not directly affect CS, it can influence the overall visual experience and potentially the perception of contrast during SPARCS testing. Image processing algorithms or display enhancements implemented by the manufacturers can also influence visual performance and measured CS.
The SPARCS test on a single laptop with the same browser can be used reliably and consistently to measure and compare CS between individuals. To ensure comparable results across different laptops, however, the specific display properties must be considered and calibrated to establish accurate measurements across models. This understanding is a step toward developing standardized and reliable methods for CS testing with modern technologies, ensuring accurate and accessible screening for ophthalmic conditions.