The chip-scale atomic clock (CSAC) is the world’s lowest-power, lowest-profile atomic clock. Thousands of units are deployed every year. But how does it perform in a rapidly changing thermal environment?
For mobile applications, frequency and timing stability in variable temperature environments is critical. Choosing the best oscillator for the job can be confusing when comparing product specifications from different manufacturers, or even from different product lines within the same company. The reason is that commercial oscillators are not held to any standard test, so the temperature profile (temperature range, ramp rate, and number of cycles) used can vary from product to product.
Atomic clocks and oscillators are generally considered to have superior temperature stability compared with their crystal-based counterparts. This is mainly due to the sealed (cesium or rubidium) gas cell's isolation from the outside environment. Other factors also contribute to an atomic oscillator's temperature resistance, as discussed later.
This series of articles will compare CSAC performance to that of an OCXO with a similar stability-versus-temperature specification. The paper will also discuss how time, temperature, and initial frequency errors can affect holdover timing error. For the purposes of this paper, holdover means the period of time when an oscillator is allowed to free-run. The frequency and timing error that accumulates during holdover is measured relative to a perfect timing reference.
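To make the idea of accumulated holdover timing error concrete, the sketch below uses the standard first-order clock model, in which time error grows from an initial time offset, an initial fractional frequency offset, and a linear frequency drift (aging) term. This is a minimal illustration of the general relationship, not the paper's specific model, and the parameter values are hypothetical examples, not specifications of any CSAC or OCXO.

```python
def holdover_time_error(t, x0=0.0, y0=0.0, d=0.0):
    """Accumulated time error (seconds) after free-running for t seconds.

    x0 : initial time offset, in seconds
    y0 : initial fractional frequency offset (dimensionless, e.g. 1e-10)
    d  : linear fractional frequency drift rate, per second (aging)

    Standard first-order model: x(t) = x0 + y0*t + 0.5*d*t**2
    """
    return x0 + y0 * t + 0.5 * d * t * t


# Hypothetical example: an oscillator left in holdover for one day (86,400 s)
# with a fractional frequency offset of 1e-10 and no aging accumulates
# 8.64 microseconds of time error.
one_day = 86_400
print(holdover_time_error(one_day, y0=1e-10))  # 8.64e-06 seconds
```

The quadratic drift term explains why even a small aging rate eventually dominates: the frequency-offset contribution grows linearly with elapsed time, while the drift contribution grows with its square.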
The results of this analysis will show that a CSAC and an OCXO with similar specifications do not necessarily have the same performance—the CSAC is superior when subjected to a rapid ambient temperature change.
In the next post of this series, I’ll write about the “effects on timing error: elapsed time.”
To read the complete white paper, go here now.