A Simple Explanation of Confidential Computing: Part One
Reimagining the padlock in your browser
You’re probably reading this article in a web browser. And there’s probably a little padlock on the address bar somewhere. This is how you know you’re “secure”.
We’ve all been trained to “check for the padlock”. But how many of us ever think about what it means and what protections it provides?
But have you ever stopped to ask yourself what that actually means? Secure in what way? What does that padlock actually represent? What protection is it giving you? What bad things could happen to you if the padlock wasn’t there?
And in any case, isn’t there a padlock when you browse to sites like Facebook? And yet aren’t they appearing in the news every day accused of “selling” or “misusing” your data? How can they do this if they have the padlock and the padlock means it’s “secure”?
The answer, of course, is that the padlock is there simply to ensure you really are connected to Facebook and not to some impostor site. And it ensures that nobody can intercept your private information as it flows back and forth between your computer and Facebook’s data centres.
That’s important, of course. But notice what that padlock doesn’t do. That padlock doesn’t tell you anything about what Facebook will do with your data. You just know you’re sharing your data with them and not somebody else.
But imagine if the world worked differently. Imagine if there was a different type of web browser. One where the padlock wasn’t only there to confirm who you were sharing your private information with but one where the padlock helped you control exactly what they could do with your data. Isn’t that actually what the world needs?
And it’s not just users of social media who have this problem. Large firms do too. Traders want to buy and sell stocks for the best prices in the most liquid venues. But they don’t want the operators of those venues using their orders to trade against them.
It’s as if they want a “padlock” around their orders: one that means the data can be used only for matching and not to give the market operator an unfair advantage. We need a way to make that stock exchange somehow “tamperproof”, so that not even the operator of the market can gain such an edge.
It turns out that pretty much any time multiple firms need to transact with each other they have this dilemma: they need to share some information in order to do business but they’re paranoid about what their counterparts might do with this information.
But what if we could do better? What if there was a way to send data to somebody and be able to control exactly what they could or couldn’t do with it?
It turns out that such a thing is possible. And it’s made possible by a concept called Confidential Computing. The key idea is this:
Confidential Computing makes it possible to run programs on somebody else’s computer but where the owner of that computer can neither influence nor observe what’s happening.
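To make that idea a little more concrete, here’s a minimal, purely illustrative Java sketch of what a client might do before handing over its data. None of the names in it belong to a real library or product; they’re hypothetical stand-ins for the attestation and encryption primitives a real Confidential Computing platform would provide.

```java
// Purely illustrative: none of these types or methods belong to a real
// library. They stand in for the attestation and encryption primitives a
// real Confidential Computing platform would provide.
import java.util.Arrays;

public class ConfidentialClientSketch {

    // A "remote attestation": a hardware-signed statement of exactly which
    // program (identified by the hash of its code) is running inside the
    // protected environment on the remote machine, plus an encryption key
    // that exists only inside that environment.
    record Attestation(byte[] codeHash, byte[] enclavePublicKey) { }

    // The hash of the one program we are willing to entrust our data to.
    private final byte[] expectedCodeHash;

    ConfidentialClientSketch(byte[] expectedCodeHash) {
        this.expectedCodeHash = expectedCodeHash.clone();
    }

    // The owner of the remote machine cannot swap in a modified program
    // without changing its code hash and invalidating the attestation, so a
    // match here tells us what will happen to our data, not just who
    // receives it.
    boolean safeToSend(Attestation attestation) {
        return Arrays.equals(attestation.codeHash(), expectedCodeHash);
    }

    byte[] protect(Attestation attestation, byte[] secretData) {
        if (!safeToSend(attestation)) {
            throw new IllegalStateException("Remote machine is not running the agreed program");
        }
        // In a real system we would now encrypt secretData to
        // attestation.enclavePublicKey(), so the machine's owner only ever
        // sees ciphertext. Encryption is elided to keep the sketch short.
        return secretData.clone();
    }
}
```

The crucial property is that the owner of the remote machine can’t quietly substitute a different program: doing so would change the code hash and invalidate the attestation, so the client would simply refuse to send anything.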
And it’s this concept we need in order to imagine padlocks on browsers that tell you what will happen to your data, not only who you’re sharing it with. It also makes it possible to keep mobile phones secure and even run sensitive workloads in the cloud.
And it will also enable us to build provably fair markets, deliver secure multi-party fraud analytics, change the economics of market data services, and more.
But hang on… scroll back to that definition I just gave. I said Confidential Computing lets us build computers whose owners no longer fully control them, right? Who would want that?!
“Making it impossible to fully control your own computer” might sound strange to some readers, especially anybody who feels that’s how their computer already works! If you’ve ever forgotten the password to your laptop or been unable to open a protected Excel spreadsheet, it might feel like today’s computers already do a pretty good job of acting as if they have power over you rather than the other way round.
But the reality is that somebody in control of a computer can do whatever they want with it. They can change what the programs do and they can inspect all the information those programs are processing. And this is why, when you send data to a website or other service, you’re totally reliant on the honesty of the firm with whom you’re interacting. There is nothing technological that constrains what they can do with your information.
And it is this problem that explains why the padlocks in today’s browsers work the way they do. Once you send your information to Facebook’s computers, there is literally nothing your browser can do to control what happens to it. Facebook operate their own computers and, if they want to change what their algorithms do, they’re free to do so, and you would never know.
But, with the emerging world of Confidential Computing, we can begin to imagine a world where that padlock is so much more meaningful… a world where it does indeed tell you what will happen to your data, not merely the identity of the megacorp with whom you’re sharing it.
And the fundamental concept upon which all this rests is the idea of running computations on a computer in a way that is protected from the owner of that computer attempting to subvert them or see what they are doing.
In part two of this series, I talk in more detail about how this surprisingly subtle concept is at the heart of your mobile phone’s security and how it might make even the most conservative firms get comfortable with moving to the cloud.
But that’s not the most interesting bit. I also show how Confidential Computing could be about to unleash a wave of new ‘tamperproof applications’ that could indeed transform stock trading, fraud analysis, market data services and more.
R3’s Confidential Computing product, Conclave, is in Beta. Conclave is the highly productive way to build ‘tamperproof’ services: write in Java and develop on any platform.