Verifiable Delay Functions and Confidential Computing
In this article, we explore a new approach to Verifiable Delay Functions (VDFs) and how they can work together with Confidential Computing.
Bringing time to secure computation systems such as Confidential Computing has long been a desired yet unattainable feature. Standard techniques fail under the Confidential Computing threat model: you cannot trust the operating system or any hardware besides the CPU. In this article, we propose a way to get proof of elapsed real-world time into the Confidential Computing system. This enables the system to verify that at least the specified amount of time has passed since a checkpoint moment. While it does not yet achieve clock time, our solution brings us one step closer to that end goal. It relies on the concept of a Verifiable Delay Function (VDF) and introduces no additional security assumptions. We also propose a new way to build VDFs using the Confidential Computing system itself. These Confidential Computing VDFs do not rely on complex mathematics and would be more accurate, more energy efficient, and have lower verification time than conventional VDFs.
Background
Time in Computers
Time synchronisation and maintenance are solved problems in regular computational settings. Computers today can establish and maintain a Wall Clock time, an accurate representation of global time.
First, to establish the correct Wall Clock time, computers use the Network Time Protocol (NTP), which organises time sources into a hierarchy of strata based on their distance, in network hops, from a reference atomic clock. The protocol first helps a computer identify a trusted source of true time on the network and then, using probabilistic estimates of the connection latency between that source and the computer, allows the computer to recalibrate itself to an accurate Wall Clock time.
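The offset estimation described above can be illustrated with NTP's standard four-timestamp formula. This is a minimal sketch; the timestamp values below are made up for illustration:

```python
def ntp_offset_and_delay(t0, t1, t2, t3):
    """One NTP request/response exchange.

    t0: client transmit time (client clock)
    t1: server receive time  (server clock)
    t2: server transmit time (server clock)
    t3: client receive time  (client clock)
    """
    # The offset averages the apparent clock difference in each
    # direction, cancelling the network delay if it is symmetric.
    offset = ((t1 - t0) + (t2 - t3)) / 2
    # Round-trip delay, excluding the server's processing time.
    delay = (t3 - t0) - (t2 - t1)
    return offset, delay

# Illustrative exchange: the client clock is 5 s behind the server,
# with a symmetric 50 ms network delay and 10 ms server processing.
offset, delay = ntp_offset_and_delay(t0=100.0, t1=105.05, t2=105.06, t3=100.11)
```

With these numbers the client learns it is roughly 5 seconds behind and can recalibrate accordingly.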
There are several techniques to maintain time. A naïve approach counts CPU cycles and estimates the elapsed time by multiplying the cycle count by the average cycle duration. Yet the moment the system is turned off, the CPU is no longer running and we lose track of how much time has passed, forcing a recalibration via NTP on start-up. Additionally, the average duration of a CPU cycle fluctuates with factors such as voltage and temperature variations in the CPU circuits, making it impossible to maintain exact time this way.
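A minimal sketch of this naive cycle-counting estimate, and of how a small frequency drift corrupts it. The 3 GHz clock and 0.1% drift figures are illustrative assumptions:

```python
def elapsed_seconds(cycle_count, avg_cycle_time_s):
    # Naive estimate: number of cycles times the average cycle duration.
    return cycle_count * avg_cycle_time_s

# A nominal 3 GHz clock: one cycle lasts 1/3e9 seconds on average,
# so 6 billion cycles should correspond to two seconds.
nominal = elapsed_seconds(6_000_000_000, 1 / 3e9)

# If the real frequency drifts by just 0.1%, the same cycle count
# corresponds to a different duration: the estimate is off by ~2 ms.
drifted = elapsed_seconds(6_000_000_000, 1 / (3e9 * 1.001))
```

Even this tiny drift accumulates to seconds per hour, which is why cycle counting alone cannot maintain exact time.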
A better alternative is a dedicated chip that keeps time: a battery-backed quartz crystal clock, much like a modern quartz wristwatch. The quartz crystal provides a stable oscillation frequency, and the battery keeps the clock counting even after the system power has been shut off.
However, both time establishment and time maintenance rely heavily on trust in other parties and can be fully controlled by the computer's host.
Confidential Computing
Confidential Computing is the idea of a machine whose operations and data are hidden and cannot be manipulated by anyone, not even the person who runs the machine. This is achieved with a specially designed CPU that can execute arbitrary logic, making it on par with regular CPUs in its capabilities. The Confidential Computing model supports interactions with external parties (clients) by sending and receiving encrypted information. The key is that external parties can always verify the system they are communicating with. This is achieved using Remote Attestation, which provides proof that the CPU is genuine and not compromised, as well as of what application is running inside the Confidential Computing unit. With that, developers can build confidential, secure applications that exchange data with clients and perform arbitrary operations on that data.
Today, multiple machines work like this. For instance, Intel SGX allows writing applications in high-level languages such as C++. Yet several open issues remain that limit the full potential of these machines and keep them from being on par with conventional computers. To name two: how can these machines maintain a state the host cannot manipulate? And how can they obtain Wall Clock time when only a trusted party can provide it?
Verifiable Delay Function (VDF)
First proposed by Dan Boneh, Joseph Bonneau, Benedikt Bünz, and Ben Fisch in 2018 [Verifiable Delay Functions], a VDF is a function whose execution time can be estimated well. Additionally, after executing the function, one can create a proof of having calculated it correctly, so that another party can verify that the execution took place. The key is that verification does not require recalculating the whole function; it can be done with very few computations, usually logarithmic in the complexity of the original function. The execution time can be estimated well because the exact number of computations required is known, and because VDFs deliberately require sequential computation, one can reason about the number of CPU cycles the function takes.
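The sequentiality property can be illustrated with a toy hash chain. This is not a full VDF: real constructions such as Wesolowski's or Pietrzak's (based on repeated squaring in groups of unknown order) add a succinct proof so that verification does not repeat the work. The sketch only shows why evaluation cannot be parallelised:

```python
import hashlib

def sequential_work(seed: bytes, iterations: int) -> bytes:
    # Each step's input is the previous step's output, so the chain
    # must be evaluated one hash at a time: throwing more cores at it
    # does not help, only a faster sequential clock does.
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h
```

Because the work is inherently sequential, the number of iterations translates directly into a number of CPU cycles, which is what lets us reason about elapsed time.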
Additionally, as CPUs can only be overclocked to a limited extent and there is a limit on how efficient a CPU can be [CPU-Z OC], one can reason about a lower bound on how short a CPU cycle can be. This means we can work with an Estimated Quickest CPU Cycle Time, with some error bounds. Thus, we can lower-bound the time the execution of a function such as a VDF took by estimating the number of required CPU cycles and multiplying it by the Estimated Quickest CPU Cycle Time. This construction implies that even a powerful adversary, who can overclock or design their own CPUs, cannot calculate the VDF faster than this lower time bound. We must note that the proposed design requires honest parties to over-compute the VDF: when challenging someone to prove a time duration, we must target the lower computation bound, so on a conventional CPU the computation time will exceed that bound. Yet the difference would be no more than 50%.
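A back-of-the-envelope sketch of this lower-bound estimate. The cycles-per-step count and the 9 GHz "quickest plausible clock" are hypothetical numbers chosen for illustration, not measured values:

```python
def min_elapsed_seconds(cycles_per_step, steps, quickest_cycle_s):
    # Lower bound on wall time: even the fastest conceivable CPU
    # needs at least this long to run the sequential computation.
    return cycles_per_step * steps * quickest_cycle_s

# Hypothetical figures: ~10 cycles per sequential VDF step, and a
# 9 GHz clock as the quickest plausible cycle time (~0.11 ns/cycle).
lower_bound = min_elapsed_seconds(cycles_per_step=10,
                                  steps=5_400_000_000,
                                  quickest_cycle_s=1 / 9e9)
```

Under these assumptions, 5.4 billion steps take at least 6 seconds even for an adversary with record-setting hardware, while an honest party on a conventional 3-4 GHz CPU would take somewhat longer, hence the over-computation noted above.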
Several VDFs have been proposed [A Survey of Two Verifiable Delay Functions], and VDFs are already used to secure blockchains such as Solana [Proof of History], where they provide proof that the required amount of time has elapsed between blocks. Current constructions have verification time logarithmic in the elapsed time the VDF proves. The design of a VDF with constant-time verification remains an open problem.
The idea of a VDF has proved useful in other settings as well. For example, it has been shown how to construct a decentralised random lottery system, such as [Unicorn], whose security is based on commit-and-reveal schemes combined with VDFs.
Applications
Proving Timeout
Our first use case is proving that an application running inside the Confidential Computing unit has been blocked while waiting for something. This can happen when, for example, the application depends on input from external parties to progress the Enclave to the next stage. If those parties go down, the application blocks while waiting for their messages. In such situations, a recovery mechanism is essential so the system can keep operating, but before we can invoke it, we must identify when such problems occur. One could trust the host to report when, for example, the waiting period for a response has timed out, yet that means blindly trusting a host that could abuse this power. As an alternative, VDFs can be used: the host can prove that a predefined amount of time has passed while waiting for a particular message, meaning we know the message has timed out. As an example, consider a system that depends on constant pings from a separate security system to perform specific operations. If the security system goes down, proof of timeout lets us take an alternative course of action, for example once the security system has been unavailable for a day. We must remain cautious of the host's ability to deny messages, yet there are ways to circumvent this, such as publicly broadcasting the IDs of sent messages.
Replay Protection
This use case expands on the previous one. Another way proof of time duration could improve Confidential Computing applications is added protection from replay and denial attacks: it would make these attacks take much more time and make them easier to detect. Today, such attacks can be carried out almost instantly. By denying a message or reordering a set of messages, the host can model the resulting application states and gain insight into secret information it should not have.
VDFs could make this take much longer for the host. For instance, consider an auction that accepts submissions for one week before announcing the result. To mount a replay attack on such an auction, the host would need to keep a machine running for seven days to evaluate each reordering of messages, making the attack highly inefficient and time-consuming. It would also mean the host must stop accepting new submissions until the replay attack finishes.
The same applies to user polls. There, a poll closed with too few submissions could disclose individual response data. An attacker could exploit this by tricking the application into believing the time to close the poll has already come, even though it has not, and extracting the information while only a few results have been received. The attacker could then replay all the messages and restart the poll to the point where the vote was closed. VDFs would solve this by widening the time window required to close the poll, giving poll submitters time to notice that the system is down and that something malicious is happening.
Previous Work on Securing Time
Here we list a few alternative approaches that have been considered or used to get time into Confidential Computing systems. We will draw on the works of Fatima M. Anwar and Mani Srivastava [Applications and Challenges in Securing Time] and of Fatima M. Anwar, Luis Garcia, Xi Han and Mani Srivastava [Securing Time in Untrusted Operating Systems with TimeSeal].
Global Time Service
This idea proposes a trusted global time authority with the power to create signed timestamps. Inside the Confidential Computing unit, we could verify the authority's signatures and thus be sure that the specified point in time has already passed. A simpler version of this is the Newspaper Method [Authentication by Newspaper], where one uses newspaper articles instead of timestamps to prove that something happened on or after a specific date.
This approach, however, has several disadvantages. First, it requires a trusted party to run the global time authority. This beacon must always be up and broadcasting to all Confidential Computing units so they can obtain proof that some time has passed, which implies that a system that is offline, or lacks a connection to the authority, cannot operate. We should add that having another trusted party is not that detrimental: most Confidential Computing solutions already rely on a central party to provide attestation services that verify their integrity. The requirement to be constantly online, however, is problematic.
Second, the method does not allow establishing the current time. We cannot check how much time passed between the creation of a timestamp and the moment the Confidential Computing unit reads it. Worse, the method gives us no notion of elapsed time at all: the host can stage timestamp messages and deliver them in quick succession, enabling time-duration attacks.
Overall, as far as we know, this method is not used for time inside Confidential Computing units.
Secure Hardware Clock
The idea is to create a Time Chip relying on a quartz clock, with additional security to prevent the chip from being physically manipulated. The Time Chip reports the amount of time elapsed since the last call to it. Its security relies on two things. First, a separate battery prevents the host from turning the chip off to reset it. Second, the CPU checkpoints every interaction it has with the clock: if the clock is restarted, the last checkpoint becomes invalid, so the CPU detects that the Time Chip has been manipulated and takes a defined action.
However, as shown in the Securing Time in Untrusted Operating Systems with TimeSeal paper, these chips were still exposed to attacks such as the host delaying or replaying messages between the Time Chip and the processor. This means the method could only prove that some time had elapsed, not exactly how much. Additionally, as this solution is hardware-based, side-channel attacks may allow the host to forge messages and prove an invalid elapsed time. Furthermore, the Time Chip works only while the CPU is turned on and thus can only attest to the period the CPU has been running. This is an inherent limitation: the method cannot prove time elapsed before the first CPU start, nor time elapsed while the system was offline.
VDFs and Confidential Computing
As the previous section shows, bringing any notion of time into Confidential Computing units has proved challenging. Here we propose a new solution for proving the amount of time elapsed inside the Confidential Computing unit, and we describe a new VDF that can be built using the Confidential Computing unit itself. Our solution allows proving elapsed time from any moment the application desires, something not previously possible in other solutions. We additionally show that our solution is secure and introduces no trust assumptions as long as the selected VDF is secure.
Solution
Proof Verification
For the rest of this section, we assume the host is malicious and wants to somehow trick the application inside the Confidential Computing unit. The Confidential Computing unit, in turn, must be convinced that a certain amount of time has elapsed.
To be convinced, the Confidential Computing unit exposes a function that receives the claimed amount of elapsed time, a seed, and a proof that enough time has elapsed. The function verifies the proof and outputs True if it correctly shows that the specified amount of time has passed, and False otherwise.
The seed is needed to make it impossible to pre-calculate the VDF and trick the system. There are several strategies to generate seeds, depending on the use case:
- Use a checkpoint to prove that a certain amount of time has elapsed since it has been generated. For this, the Confidential Computing unit would generate a truly random seed and share it with the host. The benefit of this approach is that the Confidential Computing unit could do this at any point in time, such as when it has received a message or a specific event has happened. The poll example we provided earlier is an excellent way to see this in action. The checkpoint will be made when the poll starts, and the host will begin computing a VDF. Once the required time has elapsed, the host can send proof of VDF execution to the poll application, which will verify that the calculations have been performed correctly and therefore close the poll.
- Use a trusted timestamp to prove that a given date and time have been reached. A timestamp with a signature over it is unpredictable to the host, so we can use the hash of the signed timestamp as the initiating seed. As the host cannot obtain the signed timestamp in advance, the proof of elapsed time also establishes that the particular date and time have already occurred. An example would be a system where the host queries a timestamp from a trusted time source once and then maintains a trusted time with no further interactions with that source.
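The verification interface and the checkpoint-seed strategy above can be sketched as follows. This is a toy illustration: the hash chain stands in for a real VDF, and `HASHES_PER_SECOND` is a made-up calibration constant; a real system would plug in a production VDF with its succinct verifier:

```python
import hashlib
import secrets

# Assumed calibration: sequential hashes a reference CPU computes
# per second (hypothetical figure for illustration).
HASHES_PER_SECOND = 1_000_000

def generate_checkpoint_seed() -> bytes:
    # Checkpoint strategy: the Confidential Computing unit draws a
    # fresh random seed and hands it to the host, so the VDF cannot
    # have been pre-computed before this moment.
    return secrets.token_bytes(32)

def evaluate(seed: bytes, iterations: int) -> bytes:
    # Host-side work: a sequential hash chain standing in for a VDF.
    h = seed
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

def verify_elapsed(seed: bytes, claimed_seconds: float, proof: bytes) -> bool:
    # Toy verifier: recomputes the chain. A real VDF verifier would
    # instead check a succinct proof in logarithmic (or, with our
    # Confidential Computing VDF, constant) time.
    iterations = round(claimed_seconds * HASHES_PER_SECOND)
    return evaluate(seed, iterations) == proof
```

The unit generates the seed at the checkpoint (poll start, message receipt), the host computes `evaluate`, and the unit later accepts or rejects the claimed duration via `verify_elapsed`.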
Proof Generation
Now we will examine how to generate the proof that the required time has elapsed. As mentioned before, we use a VDF with a seed defined by the Confidential Computing unit to prove that the required amount of time has passed since the checkpoint.
Host Based
Since VDF proof calculation does not require any trust, the first idea that comes to mind is to delegate it to the host. The host could run the computation as a background process on a separate machine and, once the proof has been generated, submit it to the Confidential Computing unit to prove the elapsed time.
The downside of this method is that the host can use any processor to run the VDF proof generation, so we can only approximate the elapsed time. To obtain a strong guarantee that the required amount of time has elapsed, we must therefore demand substantial over-computation, perhaps more than 10x the necessary time, because a malicious host could acquire a CPU with an extremely high clock frequency and try to game the system that way.
Confidential Computing Based
An alternative is to use the very technology we are trying to improve. One feature of the Confidential Computing unit is that Remote Attestation can prove exactly which processor the unit is running on. If the host runs the VDF inside a Confidential Computing unit, we can verify which processor it ran on and, knowing the average CPU cycle duration of that CPU, estimate time far more accurately, reducing the required overhead from about 1000% to only about 20%.
Yet we can use even more features of the Confidential Computing unit to improve the solution further. Another way for the host to prove an incorrect elapsed time is to optimise the VDF calculation: VDFs are sequential computation problems, hard to compute but easy to verify, yet one could devise a faster way to compute and prove the underlying problem. To circumvent this, we can use Remote Attestation to confirm that the exact preapproved algorithm was followed when evaluating the VDF.
Confidential Computing VDFs
At this point, one can see that we do not need complex VDFs at all. We can simplify drastically if we know that the Confidential Computing unit calculating the VDF runs a preapproved algorithm that cannot be manipulated. If that algorithm reports that it has calculated enough hashes (or another, more efficient operation) to account for the required amount of time from a provided checkpoint, we can trust it. To establish this trust, it is enough to check the attestation of the Confidential Computing unit calculating the VDF and confirm that it runs an algorithm with the defined properties.
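A sketch of this idea: an enclave-side delay loop plus a constant-time verifier. The attestation quote is a placeholder, since real quote generation and verification are platform-specific (e.g. SGX); outside an attested enclave, this code proves nothing:

```python
import hashlib
import time

def enclave_vdf(seed: bytes, seconds: float) -> dict:
    """Delay loop meant to run inside an attested enclave.

    The trust comes entirely from Remote Attestation confirming that
    exactly this code produced the output; "attestation" below is a
    stand-in for a real platform quote.
    """
    deadline = time.monotonic() + seconds
    h, count = seed, 0
    while time.monotonic() < deadline:
        h = hashlib.sha256(h).digest()  # cheap sequential work
        count += 1
    return {"seed": seed, "hashes": count, "attestation": b"<quote>"}

def verify(report: dict, min_hashes: int) -> bool:
    # Constant-time verification: trust the attested hash count
    # instead of recomputing the chain. Real code would verify the
    # attestation quote against the platform's attestation service.
    assert report["attestation"] == b"<quote>"  # placeholder check
    return report["hashes"] >= min_hashes
```

Verification cost here is independent of the elapsed time being proved, which is the constant-verification property the next paragraph highlights.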
This way, we build a new hardware-based VDF with constant verification time, regardless of the amount of time it proves to have elapsed. The big benefit is that anyone who considers Confidential Computing units secure will also trust the output of a Confidential Computing VDF, as no new points of trust are introduced.
The Confidential Computing VDF can also be used in regular systems, proving its usefulness there too: it has constant verification overhead, is easy to develop and review, and provides better guarantees than the cryptographic VDFs.
Additionally, the Confidential Computing VDF we have proposed will likely be orders of magnitude more computationally efficient, as one does not need to run a CPU at full throttle but can instead use low-cost opcodes.
Limitations
The current system we propose has several limitations.
First, because we rely on estimations, we have no hard guarantee that the exact amount of time has passed since the checkpoint; we can only show that the result lies within specific error bounds. However, the proposed Confidential Computing VDF makes these error bounds very narrow, almost reaching a hard guarantee.
We must also acknowledge that the host wastes computational power for no purpose other than proving that time has elapsed. If the period is long, this power consumption can be significant, especially given the current focus on the environment. Yet this is an issue with all VDFs, and the Confidential Computing VDF might prove more efficient by orders of magnitude.
Finally, our proposal does not say how the system could access the precise current time. That remains out of scope; we only provide a way to verify that at least a particular amount of time has passed since a point in time.
Future Work
Having described the new Confidential Computing VDF and how VDFs can work inside Confidential Computing units, the next step is to put it all to the test.
We must test how tightly we can bound the confidence intervals of the Confidential Computing VDF's time proof and how low we can make its power consumption.
After that, the next step would be to compare its performance against alternative cryptographic VDFs and measure the actual improvements, which we already project to be more than tenfold.
Conclusion
In this paper, we have explored how VDFs and Confidential Computing can work together. First, we have proposed a system for using VDFs inside the Confidential Computing model, enabling applications inside it to verify elapsed time. Second, we have proposed a new VDF based on Confidential Computing, which we expect to outperform conventional VDFs in all aspects: drastically lower error bounds, constant verification time, and significantly lower power consumption during computation. Both contributions, particularly the second one, would help advance the study of VDFs and bring them to broad adoption.