The Limits of Automation Expertise in Observatories

Observatory automation is often presented as a purely technical problem: with the right software and the right interfaces in place, operations can be streamlined, scaled, and made more reliable.

In practice, automation sits at the intersection of two very different domains.

On one side, there is the knowledge of observing itself: how astronomical targets behave over time, how sidereal time constrains what can be observed, how observing a quasar differs from observing a planet, how choices between imaging, spectroscopy, or astrometry depend on scientific goals. This knowledge is tightly connected to night operations. It includes how instruments behave, how weather affects decisions, and how observers adapt in real conditions. Much of it is tacit, built through experience rather than documentation.
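One of the constraints mentioned above, sidereal time, can be made concrete. The sketch below is illustrative only (the function names and the ±3 hour meridian window are assumptions of this post, not part of any observatory's actual software): it uses a standard low-precision approximation for Greenwich Mean Sidereal Time to decide whether a target at a given right ascension is near the meridian at a given moment.

```python
from datetime import datetime, timezone

# Reference epoch J2000.0 (2000-01-01 12:00 UTC).
J2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)

def local_sidereal_time(when_utc, longitude_deg):
    """Approximate local sidereal time in hours.

    Uses the low-precision GMST formula
    GMST = 18.697374558 + 24.06570982441908 * D (hours),
    where D is days since J2000.0; good to about a second
    over decades, which is plenty for scheduling.
    """
    days = (when_utc - J2000).total_seconds() / 86400.0
    gmst = (18.697374558 + 24.06570982441908 * days) % 24.0
    return (gmst + longitude_deg / 15.0) % 24.0

def hour_angle(lst_hours, ra_hours):
    """Hour angle of a target in hours, wrapped to [-12, +12)."""
    return (lst_hours - ra_hours + 12.0) % 24.0 - 12.0

def is_near_meridian(when_utc, longitude_deg, ra_hours, window_hours=3.0):
    """Crude observability test: is the target within +/- window of transit?"""
    lst = local_sidereal_time(when_utc, longitude_deg)
    return abs(hour_angle(lst, ra_hours)) <= window_hours
```

Real night operations layer far more on top of this (altitude limits, airmass, moon avoidance, weather), which is exactly the tacit knowledge the surrounding text describes.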

On the other side, there is software: system architecture, distributed processes, fault tolerance, interfaces between components, and the operational discipline required to keep systems stable over long periods.

Both domains are deep. Both require years of experience. And they evolve independently.

The observing console of the AAT, at Siding Spring, Australia. What a mix of old and new stuff…

Where the Gap Appears

Automation requires these two domains to meet.

A system that is technically well designed but disconnected from observing practice quickly becomes impractical. It may assume ideal conditions, overlook common edge cases, or fail to reflect how decisions are actually made during a night of observations.

Conversely, a system built with strong observational intuition but limited software structure often accumulates complexity. Scripts multiply, implicit dependencies appear, and scaling becomes difficult. What works for a single setup or a small team becomes fragile when extended to multiple users, instruments, or sites.

In many observatories, these two perspectives are distributed across different people or teams. Astronomers, operators, and instrument specialists hold one side of the knowledge. Software engineers hold the other. Coordination between them exists, but it is rarely continuous or deeply integrated.

As a result, automation efforts often converge toward local optimizations. A script solves a specific problem. A tool improves a specific workflow. But the system as a whole remains difficult to reason about.

Scaling Beyond Individuals

The difficulty is not simply a lack of expertise. It is that the required expertise does not naturally concentrate.

Very few individuals are deeply familiar with both the realities of observing and the design of robust software systems. Even within teams, maintaining that dual perspective over time is challenging.

This becomes more visible as observatories scale.

More users.
More instruments.
More remote operations.
More expectations for reliability.

Automation is no longer just about reducing manual work. It becomes part of the infrastructure itself. And the cost of misalignment between observing practice and system design grows with scale.

The question is not how to automate individual tasks.

It is how to design systems that remain aligned with the way observations are actually conducted, while being robust enough to support them at scale.
