Quantum Computers Will Make Short Work of Big Data
Depending on which website you trust and your faith in statistics, about 90% of all the data in the world was created in the last 2 years. That’s insane. We’re generating 2.5 quintillion bytes of data each day. At that pace, it’s no wonder we haven’t made much progress toward getting more out of this data deluge. We have more data than we know what to do with, and almost no way to process it. The best part: our rate of data creation is growing almost exponentially. For example, in 2016 we sent a little more than 3.5 million text messages per minute. In 2017 so far, we’ve sent over 15 million texts per minute. 15 million!
This is where the real power of quantum computing will shine. Few doubt there will be an increase in processing speed in the quantum age. John Preskill coined the term “quantum supremacy” to describe the point at which a quantum computer can solve a problem that no classical computer could solve in any practical amount of time. (Rumors that Google would demonstrate quantum supremacy by the end of 2017 were slightly exaggerated.)
Not By Speed Alone
Alone, this fabled increase in speed doesn’t mean we can celebrate the end of the world’s computational problems. But it does dramatically change how we interact with big data, mostly because it lets us run quantum algorithms that have no practical classical equivalent.
Imagine you have a database of financial data (weather data, genetic data, etc.) with 100 quintillion entries. A classical computer would take an impractical amount of time to search that data set for a single item: with no structure to exploit, it would have to examine about half the entries on average. Now imagine you wanted to do something interesting with that data, like analyze it. You would need to pay big bucks for lots of time on a modern-day supercomputer.
But if you had access to a quantum computer, you could use Grover’s algorithm to get a quadratic speedup: the number of lookups grows with the square root of the database size rather than with the size itself. You would only have to look at about 10 billion entries, the square root of 100 quintillion, to find the desired item. And the advantage only grows as the data set does.
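To make that square-root scaling concrete, here’s a tiny classical simulation of Grover’s search in plain NumPy. This is a sketch of my own for illustration, not code from any of the research mentioned here; a real quantum computer wouldn’t store the state this way, and the database size and marked index are made up. On a toy “database” of 2^16 entries, about 201 Grover iterations (roughly the square root of 65,536) find the marked entry with near certainty, where a classical scan would expect around 32,000 lookups.

```python
# Minimal state-vector simulation of Grover's search (illustrative sketch only).
import numpy as np

def grover_search(n_qubits, marked_index):
    N = 2 ** n_qubits                          # number of entries in the toy database
    amps = np.full(N, 1.0 / np.sqrt(N))        # uniform superposition over all entries

    iterations = int(np.pi / 4 * np.sqrt(N))   # ~(pi/4) * sqrt(N) Grover iterations
    for _ in range(iterations):
        amps[marked_index] *= -1.0             # oracle: flip the sign of the marked entry
        amps = 2.0 * amps.mean() - amps        # diffusion: inversion about the mean

    return iterations, amps[marked_index] ** 2 # probability of measuring the marked entry

iters, prob = grover_search(n_qubits=16, marked_index=12345)
print(f"entries: {2**16}, Grover iterations: {iters}, success probability: {prob:.4f}")
```

Scale the same arithmetic up to 10^20 entries and the iteration count lands on the order of 10^10, which is where the 10 billion figure above comes from.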
But Wait, There’s More!
Quantum computing doesn’t just have the potential to improve search speed. Research suggests that we may see an exponential speed increase in big data classification and topological analysis of complex data sets. Both of these cases involve applying quantum computing to existing machine learning systems (foundational for artificial intelligence, by the way).
Researchers at MIT and Google have demonstrated mathematically that a support vector machine, a workhorse of existing machine learning, can be implemented on a quantum computer to obtain an exponential speedup in data classification and regression analysis. One of those same big brains at MIT, Seth Lloyd, collaborated with researchers at USC and the University of Waterloo to propose a new theory on how to execute topological analysis using a quantum computer. Topological analysis characterizes the overall shape of large data sets with lots of dimensions and lots of noise: how many connected pieces, loops, and holes the data contains. Sounds useful, right?
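For context, here’s what the classical baseline looks like: a small kernel support vector machine in scikit-learn. This is my own toy example, not the MIT/Google construction (their proposal is a least-squares variant designed for quantum hardware). The expensive part of training, the optimization over an N-by-N kernel matrix, is the piece that gets painful as the data grows and that the quantum version claims to speed up exponentially.

```python
# A toy classical kernel SVM, shown only as the baseline the quantum proposal targets.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small synthetic data set: 1,000 points with 20 features each (made-up numbers).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training a kernel SVM means solving an optimization problem over an N x N
# kernel matrix; that cost is what blows up as N grows.
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```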
Give That Man a Cookie
I’ll let Seth explain the potential benefit to you in his own words:
If you have a dataset with 300 points, a conventional approach to analyzing all the topological features in that system would require a computer the size of the universe. That is, it would take 2^300 (two to the 300th power) processing units—approximately the number of all the particles in the universe. In other words, the problem is simply not solvable in that way.
That’s where our algorithm kicks in. Solving the same problem with the new system, using a quantum computer, would require just 300 quantum bits — and a device this size may be achieved in the next few years.
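A quick sanity check on those numbers (my own arithmetic, not part of Lloyd’s quote): 2^300 really is on the order of 10^90, which is why one processing unit per combination is hopeless, while the state of n qubits lives in a 2^n-dimensional space, so 300 qubits can in principle carry an amplitude for every one of those combinations at once.

```python
# Sanity-checking the "computer the size of the universe" claim (my arithmetic).
n_points = 300
combinations = 2 ** n_points                 # processing units a brute-force approach would need
print(f"2^{n_points} = {combinations:.3e}")  # ~2.037e+90

# The quantum state of n qubits is a vector in a 2^n-dimensional space,
# so 300 qubits span 2^300 dimensions: one amplitude per combination.
```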
You can see the potential; it’s like a drug. I know a large-scale, general-purpose quantum computer doesn’t exist yet. But when it does in a year or two, think of what we’ll learn from the hundreds of exabytes of data we’ve already generated!