TychoCam

The Technology

Rather than relying on heuristic thresholds, the system uses a physically informed, parameterized clear-sky model combined with image differencing to identify clouds under daylight conditions. This approach balances physical realism with computational efficiency, enabling continuous, unattended operation in support of Alpaca observing conditions reporting.

Measuring the Alpaca-Required Observing Conditions

The TychoCam leverages several hardware technologies and software techniques to determine the ASCOM Observing Conditions and to present them in a meaningful, useful manner both on the website and through the APIs. The API complies with the ASCOM standards.

Hardware Gauges and Measurements

Hardware to precisely measure conditions at the camera's geographic location is attached to the TychoCam. The TychoCam integrates the following sensors (with a considerable amount of supporting custom software):

  • Ambient Temperature
  • Infrared Sky Temperature
  • Wind Speed and Direction
  • Relative Humidity
  • Rain Intensity
  • Barometric Pressure
  • Sky Brightness (LUX)
  • Sky Quality (Bortle)
  • Wind Gust

Calculated Observing Conditions

  • StarFWHM
  • Cloudiness

These conditions are derived by comparing the camera image to the Bright Star Catalog. The resulting measurements are provided in the required formats and units (LUX, Bortle, arc-secs, metric, etc.). This data is available on the user website as well as through the APIs.
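As an illustration of API access, the minimal Python sketch below reads a few ObservingConditions properties over the ASCOM Alpaca REST interface. The host name, port, and device number are placeholders that depend on the local installation, and the error handling is deliberately simplified.

    import requests

    # Placeholder address of the TychoCam's Alpaca server (site-specific).
    BASE = "http://tychocam.local:11111/api/v1/observingconditions/0"

    def read_property(name, client_id=1):
        """Read one ASCOM ObservingConditions property via the Alpaca REST API."""
        resp = requests.get(f"{BASE}/{name}", params={"ClientID": client_id}, timeout=5)
        resp.raise_for_status()
        body = resp.json()
        if body.get("ErrorNumber", 0) != 0:
            raise RuntimeError(body.get("ErrorMessage", "Alpaca device error"))
        return body["Value"]

    for prop in ("temperature", "humidity", "skytemperature", "cloudcover", "starfwhm"):
        print(prop, read_property(prop))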

As a backup to the local conditions-reporting hardware, the TychoCam will query the local National Weather Service to fill in any missing data.

Image Processing - Astronomical Night

An image is captured at an interval ranging from 30 ms to 60 seconds with the 1816x1816 color astronomy camera and fish-eye lens. The image is processed as follows (simplified overview; a sketch of the catalog-comparison step follows the list):

  • Transform the Bright Star Catalog according to location, time, magnitude, and other parameters (altitude, ...).
  • Transform the image with a transform matrix (rotation, fisheye mapping)
  • Find likely stars in the image
  • Flatten the fisheye image to match the star catalog
  • Create and apply moon and user-provided masking
  • Calculate average starFWHM from a selection of stars
  • Plot the image with likely, found, and not found stars
  • Calculate cloudiness by comparing found/not found catalog stars
  • Plot constellation or range rings
  • Add optional data to the image legend for convenience: cloud cover, temperatures, dew point, sky temperature, ...
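To make the catalog-comparison step concrete, the sketch below matches predicted catalog star positions against detected star centroids and derives a simple cloudiness percentage. The matching radius, array shapes, and function name are illustrative assumptions, not the production pipeline.

    import numpy as np
    from scipy.spatial import cKDTree

    def cloudiness_from_catalog(catalog_xy, detected_xy, match_radius_px=4.0):
        """Estimate cloud cover as the fraction of expected catalog stars
        that were not found near their predicted image positions.

        catalog_xy  : (N, 2) predicted pixel positions of catalog stars
        detected_xy : (M, 2) centroids of stars actually detected in the image
        """
        if len(catalog_xy) == 0:
            return 0.0
        if len(detected_xy) == 0:
            return 100.0
        tree = cKDTree(detected_xy)
        # Distance from each expected star to its nearest detection.
        dist, _ = tree.query(catalog_xy, k=1)
        found = dist <= match_radius_px
        return 100.0 * (1.0 - found.mean())   # percent of catalog stars judged obscured

    # Example with made-up coordinates: one of three expected stars is missing.
    catalog = np.array([[100.0, 120.0], [400.0, 380.0], [800.0, 900.0]])
    detected = np.array([[101.2, 119.5], [399.0, 381.0]])
    print(cloudiness_from_catalog(catalog, detected))   # ~33.3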

Image Processing - Daytime Cloud Detection

Daytime cloud detection presents a fundamentally different problem from nighttime astronomical imaging. The observed sky brightness is dominated by solar illumination, atmospheric scattering, and wavelength-dependent attenuation, requiring a model-based approach rather than direct thresholding.

To address this, the system computes an idealized clear-sky radiance model for the current observing geometry and compares it to the observed camera image in real time.

Clear-Sky Reference Modeling

The clear-sky model estimates the expected sky radiance as a function of:

  • Solar zenith angle
  • Atmospheric scattering and attenuation
  • Viewing direction (pixel-wise line of sight)
  • Instrument geometry and lens projection

The model accounts for the fact that shorter wavelengths are scattered more strongly, causing the sky to redden and dim as optical depth increases. This behavior is approximated using parameterized Rayleigh and aerosol (Mie) scattering components suitable for real-time computation.
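A minimal sketch of one such parameterization is shown below, assuming a Rayleigh phase term, a Henyey-Greenstein phase term standing in for the aerosol (Mie) component, and extinction over the line-of-sight airmass. The optical depths, asymmetry parameter, and overall scaling are illustrative placeholders rather than the calibrated model, and the scattering angle (between the line of sight and the Sun) is assumed to be precomputed.

    import numpy as np

    def clear_sky_radiance(view_zenith_deg, scattering_angle_deg,
                           tau_rayleigh=0.10, tau_aerosol=0.05, g=0.75):
        """Illustrative single-scattering clear-sky radiance (arbitrary units).

        view_zenith_deg      : zenith angle of the pixel's line of sight
        scattering_angle_deg : angle between the line of sight and the Sun
        """
        theta = np.radians(scattering_angle_deg)

        # Rayleigh phase function: favors forward and backward scattering.
        p_ray = 3.0 / (16.0 * np.pi) * (1.0 + np.cos(theta) ** 2)

        # Henyey-Greenstein phase function as a simple stand-in for
        # strongly forward-peaked aerosol (Mie) scattering.
        p_aer = (1.0 - g**2) / (4.0 * np.pi *
                                (1.0 + g**2 - 2.0 * g * np.cos(theta)) ** 1.5)

        # Kasten-Young relative airmass along the viewing direction.
        z = np.radians(view_zenith_deg)
        airmass = 1.0 / (np.cos(z) + 0.50572 * (96.07995 - view_zenith_deg) ** -1.6364)

        # Light scattered into the line of sight grows with optical depth
        # along the path, and is attenuated by extinction over that path.
        tau = (tau_rayleigh + tau_aerosol) * airmass
        scattered = (tau_rayleigh * p_ray + tau_aerosol * p_aer) * airmass
        return scattered * np.exp(-tau)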

Two illumination regimes are used:

  • Day Mode (solar zenith < ~70°):
    A simplified scattering model optimized for high Sun elevations, where multiple scattering and low-angle extinction effects are minimal.
  • Low-Sun / Night Transition Mode (solar zenith > ~70°):
    Additional attenuation and base-sky terms are applied to account for increased optical path length, enhanced scattering, and reduced direct solar contribution.

This separation is an engineering optimization that preserves accuracy while maintaining real-time performance.
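The regime switch might be expressed roughly as below. The 70 degree threshold comes from the description above, while the extra attenuation factor and base-sky term are placeholder values used only for illustration.

    def sky_model_terms(sun_zenith_deg, threshold_deg=70.0):
        """Choose additional model terms for the two illumination regimes.

        Returns (extra_attenuation, base_sky) to be applied to the
        clear-sky radiance; the numbers are illustrative placeholders.
        """
        if sun_zenith_deg < threshold_deg:
            # Day mode: high Sun, simplified scattering, no extra terms.
            return 1.0, 0.0
        # Low-Sun / night transition: longer optical path and dimmer sky,
        # plus a small base-sky floor so the model never goes fully dark.
        return 0.5, 0.01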

Image Differencing and Cloud Isolation

For each frame:

  • A clear-sky radiance image is synthesized for the current Sun position and camera geometry.
  • The observed image is geometrically corrected using a transformation matrix to map fisheye lens coordinates into true sky coordinates.
  • The modeled clear-sky image is subtracted from the observed image to isolate excess radiance attributable to clouds.
  • The Sun and Moon are masked to eliminate direct illumination artifacts.
  • Fixed obstructions (trees, buildings, antennas, domes) are excluded via site-specific masks.

The remaining high-intensity regions are classified as cloud pixels.
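A condensed sketch of this differencing step follows, assuming the observed and modeled images are already in a common sky projection and that the Sun/Moon and obstruction masks are supplied. The residual threshold is a placeholder, not the calibrated value.

    import numpy as np

    def cloud_pixels(observed, modeled_clear_sky, exclusion_mask,
                     residual_threshold=0.15):
        """Classify cloud pixels from the image-minus-model residual.

        observed, modeled_clear_sky : 2-D arrays in a common sky projection
        exclusion_mask              : True where pixels must be ignored
                                      (Sun, Moon, trees, buildings, domes, ...)
        """
        # Excess radiance relative to the clear-sky prediction.
        residual = observed - modeled_clear_sky

        valid = ~exclusion_mask                      # usable sky pixels
        clouds = (residual > residual_threshold) & valid
        return clouds, valid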

Cloud Fraction Estimation

Cloudiness is reported as a fractional sky coverage, computed as the percentage of sky pixels exceeding the modeled clear-sky radiance threshold after masking and correction.
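In code form, the fraction reduces to a ratio over the unmasked sky pixels, for example:

    import numpy as np

    def cloud_fraction_percent(clouds, valid):
        """Percentage of usable sky pixels classified as cloud."""
        n_valid = np.count_nonzero(valid)
        if n_valid == 0:
            return float("nan")      # no usable sky pixels in this frame
        return 100.0 * np.count_nonzero(clouds & valid) / n_valid

Combined with the cloud_pixels sketch above, cloud_fraction_percent(*cloud_pixels(observed, modeled, mask)) would yield the reported coverage figure.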

This approach provides:

  • Quantitative cloud coverage
  • Independence from absolute brightness
  • Robust performance under varying solar elevation and atmospheric conditions

Optional Overlays and Diagnostics

For diagnostic and visualization purposes, optional overlays may be rendered, including:

  • Solar and lunar positions
  • Sky coordinate grids
  • Constellation outlines (for nighttime operation)
  • Environmental data values

These overlays do not affect cloud detection calculations.