How Do Computers Keep Track of Time – Cracking the Secrets

Rahul Srivastav

Hello! My name’s Rahul, and today we are going to discuss something quite interesting: how do our computers and laptops keep track of the date and time?

So, I just recently came across the first laptop I ever owned and wondered if it’d still work today.

I took it out of the storage box, plugged it into one of my power outlets, pressed the power button, and interestingly, it still worked! However, there was one problem – when it booted, the device showed me the date: the 1st of January, 1970.

The machine was still running perfectly fine, but there was just one small issue: it was running 48 years behind. That got me thinking about how computers track time, considering that they do not have any mechanical movement inside them like traditional watches do.

How Computers Keep Track of Time


My first laptop works absolutely fine; it’s just showing me the wrong date and time.

This then led me to ask myself several questions.

  1. How do computers keep track of time, even when turned off?
  2. How come my old laptop only lost track of time after many years of being in storage and not every moment I turned it off?

Interestingly, the very first successful computers such as the original Apple II never even bothered with keeping time.

Coming back to the topic: if an application needed to know what time it was, it had to ask the user. Once the user typed in the current time, the computer would keep track of it by interrupting its processor at steady intervals.

Wow, that was a bit complicated. Don’t worry, I have broken this topic down into multiple sub-sections and have tried to explain the whole process in a detailed yet easy-to-understand manner.

Breakdown of the Process

Computer processors run at a specific frequency known as clock speed. Clock speed is measured in Hertz (which is a unit of frequency) and is used to synchronize the various components found inside the central processing unit, so they work together nicely.

1 Hertz means one cycle per second.

For instance, let us say that a single instruction on the processor requires a single clock cycle to complete. That means a single-core processor with a speed of 1 Hertz can execute one instruction per second.

That is really slow if you take into account that a simple math calculation like two times two can require up to 7 cycles – in our case, the processor would take up to 7 seconds to calculate the answer.

Fortunately, modern processors, such as those on our laptops and computers, are clocked at multiple Gigahertz; a 1 Gigahertz CPU can, therefore, execute 1,000,000,000 operations per second.

Quick Fun Fact: You can’t use clock speed alone to compare different processors.

A CPU with a clock speed of 2 Gigahertz isn’t necessarily twice as fast as one clocked at 1 Gigahertz. It all depends on the processor’s architecture and the number of cycles it requires to process simple operations.

There’s a possibility that the “faster” processor requires twice as many cycles to complete a simple multiplication, making it no faster than the “slower” processor.

Back to our point on keeping time.

Let us say we have a processor that is clocked at 2 Gigahertz – that means that it can execute approximately 2,000,000,000 instructions per second. To keep track of time, what we need to do is to count the number of cycles that have passed. If we count 1,000,000,000 cycles on this particular processor, that means that about half a second has passed.
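The arithmetic above can be sketched in a few lines of Python (a toy model – the fixed 2 GHz clock is an assumption, and real CPUs vary their frequency):

```python
# Toy illustration: converting a cycle count into elapsed time.
# Assumes a fixed 2 GHz clock; real processors scale their frequency up and down.
CLOCK_HZ = 2_000_000_000  # 2 Gigahertz = 2 billion cycles per second

def elapsed_seconds(cycles: int) -> float:
    """Wall-clock time implied by a number of elapsed clock cycles."""
    return cycles / CLOCK_HZ

print(elapsed_seconds(1_000_000_000))  # 0.5 -> counting a billion cycles means half a second passed
```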

However, it is impossible to count every clock cycle.

If we used every processor cycle to run a time-keeping program, there’d be no room left for other applications to run. To make the process more efficient, CPUs use something known as a clock oscillator.

Clock oscillators take a processor’s clock speed and slow it down to a frequency that’s more manageable, like 100 Hertz – 100 ticks per second. This much slower signal is then used to interrupt the processor, allowing it to run the program that tracks time. And while this setup seems to solve the problem, it actually creates a host of new problems.

The first issue is accuracy. Our clock is only as accurate as the clock oscillator driving it. For instance, if the interrupt fires every ten milliseconds, or 100 times per second, the clock’s resolution can never be finer than ten milliseconds.
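Here is a minimal Python sketch of such a tick-based clock (the 100 Hz rate and the interrupt handler are illustrative, not any real kernel’s API):

```python
# Sketch of a tick-counting clock: a 100 Hz timer means time only
# ever advances in 10 ms steps.
TICK_HZ = 100
MS_PER_TICK = 1000 // TICK_HZ  # 10 ms per interrupt

class TickClock:
    def __init__(self):
        self.ticks = 0

    def on_timer_interrupt(self):
        """Called by the (hypothetical) timer hardware 100 times per second."""
        self.ticks += 1

    def now_ms(self) -> int:
        return self.ticks * MS_PER_TICK

clock = TickClock()
for _ in range(250):           # simulate 2.5 seconds of interrupts
    clock.on_timer_interrupt()
print(clock.now_ms())          # 2500 -- anything that happens between ticks is invisible
```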

Secondly, we have not solved the power loss problem.

If we turn off the computer, the processor also turns off, which means it’s no longer keeping track of time. To solve this problem, computers have something known as a Real Time Clock, or RTC.

The Real Time Clock is a tiny chip that keeps track of time by passing a small amount of electricity through a small quartz crystal, which makes it vibrate at a frequency of 32,768 Hz. Counting those vibrations lets the chip keep track of time.
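That oddly specific number isn’t random: 32,768 is exactly 2 to the power of 15, so a chain of 15 divide-by-two stages turns the crystal’s signal into exactly one pulse per second. A quick Python check:

```python
# Why 32,768 Hz? Because 32_768 == 2**15, halving the signal 15 times
# lands on exactly 1 Hz -- one pulse per second.
freq = 32_768
assert freq == 2 ** 15

stages = 0
while freq > 1:
    freq //= 2     # each binary divider stage halves the frequency
    stages += 1

print(stages, freq)  # 15 divider stages bring it down to 1 Hz
```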

Quartz is generally used in watches since it is cheap, relatively accurate, and uses very little power. Just like with a watch, the Real Time Clock is powered by a small battery that keeps it running even when you turn off your computer.

The reason my old laptop lost all track of time was that the Real Time Clock battery had died – something to be expected after years of storage. When the system booted up again, it could not read the time from the Real Time Clock; as a result, it fell back to the default value, which on my old PowerBook was 1970. The Real Time Clock in modern devices is pretty small; in fact, it can be very hard to spot in a phone or laptop.

But they were humongous in the past.

As I mentioned earlier, the Apple II couldn’t keep track of time; however, there were aftermarket Real Time Clock modules, such as the Thunderclock, available. Just look at how massive it was! The RTC was so huge it had to be powered by two equally huge batteries! Back to our discussion before I get distracted again. Thanks to the Real Time Clock, we are now able to turn off our computers without fearing that they will forget the time when powered back on.

However, RTCs aren’t 100% accurate.

In fact, depending on the model you have, your RTC could lose or gain up to 15 seconds every 30 days. That means that after a while, your clock could run behind or ahead.

This is quite common with home appliances and watches as well, which is why you have to correct them now and then. The phenomenon is known as “clock drift” and happens because most timepieces have limited precision.

The good thing is that our computers can compensate for this. Computers connected to the internet can use the Network Time Protocol or NTP to synchronize their clock with super accurate sources like atomic clocks.

What is Network Time Protocol?

NTP is even designed to compensate for internet slowdowns, or latency. As a matter of fact, over the public internet it’s accurate to within a few tens of milliseconds, while on local networks the error can be reduced to under 1 millisecond.
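The trick behind that accuracy is that NTP timestamps the request on both ends and averages the travel time out. Here is a sketch of the offset and delay math from the NTP specification (RFC 5905), with made-up example timestamps:

```python
# NTP's latency compensation uses four timestamps:
#   t1: client sends the request      t2: server receives it
#   t3: server sends the reply        t4: client receives the reply
def ntp_offset_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) + (t3 - t4)) / 2  # how far the client clock is off
    delay = (t4 - t1) - (t3 - t2)         # round-trip network delay
    return offset, delay

# Made-up example: the client clock runs 5 s behind, with 0.2 s latency each way.
offset, delay = ntp_offset_delay(t1=100.0, t2=105.2, t3=105.3, t4=100.5)
print(offset, delay)  # roughly 5.0 s offset, 0.4 s round-trip delay
```

Because the two one-way trips appear with opposite signs in the offset formula, symmetric network latency cancels out entirely.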

So far, we have discussed the various ways computers keep track of time.

However, how do they work together?

When you start up your computer or laptop, the OS fetches the time from the Real Time Clock, which has been tracking it while the machine was powered off. Once it has the time, the OS continues keeping track of it on its own via timer interrupts, and periodically syncs with a Network Time Protocol server to compensate for any clock drift.
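Putting the whole flow together, here is a toy Python model of that boot sequence (all the numbers are made up for illustration):

```python
# Toy model of the three mechanisms working together:
# RTC seed at boot -> timer-interrupt ticks -> periodic NTP correction.
class SystemClock:
    def __init__(self, rtc_seconds: float):
        self.time = rtc_seconds        # 1. boot: seed the clock from the RTC

    def timer_interrupt(self):
        self.time += 0.01              # 2. each 100 Hz tick advances time by 10 ms

    def ntp_sync(self, ntp_offset: float):
        self.time += ntp_offset        # 3. NTP correction absorbs accumulated drift

clock = SystemClock(rtc_seconds=1_700_000_000.0)  # made-up RTC reading
for _ in range(100):                   # one second's worth of ticks
    clock.timer_interrupt()
clock.ntp_sync(ntp_offset=-0.25)       # NTP says we drifted 250 ms ahead
print(clock.time)
```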

When you turn your computer off, the Real Time Clock continues keeping track of time. We now understand how computers keep track of time; however, there is one question that we’ve still not answered – can they keep track of time forever? Well, as you can already tell, there are several limitations.

In the late 90s, people feared that jumping into the new millennium would cause many computer programs to crash, since they stored only the last two digits of the year. For instance, 1999 was represented as 99; as such, people believed that as the new year rolled in, those programs would consider the year to be 00 – that is, 1900.
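The buggy assumption can be captured in two lines of Python (a deliberately simplified illustration):

```python
# Y2K in miniature: old software stored "99" and assumed the century was 1900.
def two_digit_year_to_full(yy: int) -> int:
    return 1900 + yy                   # the assumption baked into old programs

print(two_digit_year_to_full(99))  # 1999 -- works fine through the 90s
print(two_digit_year_to_full(0))   # 1900 -- but the user meant 2000!
```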

This was known as the millennium bug, or Y2K, and if left unpatched it would have rendered many computer systems useless. However, it did not end up being such a huge problem, since most software companies created fixes for it in time. A more recent bug, known as the Year 2038 Problem, affects systems that represent time by counting the number of seconds that have passed since the 1st of January, 1970 and store that count in a signed 32-bit integer – which includes many Linux systems.

Here is a good example of such a counter.

The first bit tells whether the number is positive or negative – here, the zero means it’s positive. However, on the 19th of January 2038, the counter will run out of space and flip that zero into a one, turning the counter into a negative number. A system affected by this bug will then think the year is 1901.
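We can reproduce that rollover in Python by packing the counter into 32 bits ourselves:

```python
import struct
from datetime import datetime, timezone

# The largest value a signed 32-bit counter can hold:
max_time = 2 ** 31 - 1
print(datetime.fromtimestamp(max_time, tz=timezone.utc))   # 2038-01-19 03:14:07+00:00

# One second later, the sign bit flips and the counter goes negative:
wrapped = struct.unpack("<i", struct.pack("<I", max_time + 1))[0]
print(wrapped)                                             # -2147483648
print(datetime.fromtimestamp(wrapped, tz=timezone.utc))    # 1901-12-13 20:45:52+00:00
```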

Luckily though, as consumers, we don’t have to worry much, because we’re running 64-bit systems, which can keep track of time pretty much forever.

It is, however, a more serious problem for embedded devices and old legacy systems that can’t easily be upgraded. But we still have some time to come up with solutions! Time to wrap up, then! As you can see, keeping track of time is not as easy as you might think. And now you know how a computer does it!
