In the last couple of weeks there have been a bunch of press releases about Microsoft’s soon-to-be-released quantum computing programming language, plus related articles from IBM, Google, and others. China is also getting press for using quantum effects to help secure satellite communication — wow!
The technology sounds fascinating, and it’s amazing that hardware and software for quantum computing are actually starting to appear. I remember reading science fiction books as a child that mentioned quantum computing as a super-theoretical concept, and I never imagined it would become reality so soon. I’m still trying to scratch the surface of understanding how these systems work, so my goal is to read, learn, and summarize my understanding of quantum computing in this space, and of how it will open opportunities for software developers. While I may never fully understand the mathematics, I don’t feel so bad — Bill Gates and Satya Nadella struggle with the concepts as well. I welcome any comments and feedback to help me improve my understanding.
My first instinct is to say that quantum computing will not completely replace classical computing — instead, they will co-exist. Classical computing is deterministic, which is wonderful. I want to know that the e-mail I send will get delivered. Quantum computing, on the other hand, deals with probabilities. It seems better suited to modeling and simulation of real-world phenomena, or areas where some fuzziness or randomness is beneficial (like cryptography). Many articles state things like “N qubits can exist in the superposition of all 2^N states at the same time”. But you don’t know which state they are in until you “see” one (or more?), at which point via entanglement you can see what states the other qubits are in. How that helps computationally, I don’t quite grasp yet, nor do I truly understand the practical implications of quantum computing (disregarding all the marketing-speak)…so more reading for me!
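To make the “2^N states, but you only see one” idea concrete for myself, here’s a toy classical simulation — not real quantum mechanics or any quantum SDK, just an illustration I wrote of how an equal superposition assigns each basis state an amplitude, and how measurement picks one outcome with probability equal to the squared amplitude:

```python
import random

# Toy sketch (my own illustration, not a quantum library): an equal
# superposition over all 2^N basis states of N qubits. Each basis state
# gets amplitude 1/sqrt(2^N), so its measurement probability |amp|^2
# is 1/2^N.
N = 3
num_states = 2 ** N
amplitudes = [1 / num_states ** 0.5] * num_states

def measure(amps):
    """'Collapse' the superposition: pick one basis state with
    probability equal to its squared amplitude."""
    probs = [a * a for a in amps]
    return random.choices(range(len(amps)), weights=probs)[0]

state = measure(amplitudes)
print(f"measured basis state: {state:0{N}b}")
```

The part this toy can’t capture is exactly what puzzles me: a classical simulation has to track all 2^N amplitudes explicitly, while quantum hardware holds them “for free” — and clever algorithms arrange for the amplitudes to interfere so that useful answers become likely measurement outcomes.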
MIT recently announced a 10-year partnership with IBM, centered around artificial intelligence and the Watson platform. Days before the announcement, STAT News published a report that discussed the challenges Watson faces in oncology, in actually helping physicians and patients. One of the big challenges is training and maintaining Watson’s knowledge base. Lack of data for training is also mentioned as a significant challenge in an MIT Technology Review article earlier this year about Watson.
Data science has become a huge buzzword, and with the huge amount of data now available, many opportunities exist to gain new insights. But machine learning / artificial intelligence / deep learning (whatever you want to label it) all face the same challenges that they have always faced. Bias in training data leads to incorrect results, as with Google’s image tagging blunder in 2015. Lack of high-quality, “big enough” data sets makes training difficult, as seen with Watson for Oncology. These challenges stem from the basic mathematics behind neural networks and other machine learning techniques — which all rely heavily on the training data fed into the algorithms. To allow computers to learn like humans and be successful in different contexts, Google’s DeepMind is trying to make computers more like human brains. More advanced research along these lines would be amazing to see out of the MIT / IBM collaboration, and I can’t wait to see what the researchers develop.
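The “bias in, bias out” point can be shown with a deliberately silly example I put together — a “model” that just memorizes the majority label of its (hypothetical, made-up) training sample. Feed it a skewed sample and the skew comes straight back out as wrong predictions:

```python
from collections import Counter

# Hypothetical, made-up training data: 95% "cat", 5% "dog".
biased_training_labels = ["cat"] * 95 + ["dog"] * 5

def train_majority_classifier(labels):
    """The simplest possible 'model': always predict the most
    common label seen during training."""
    return Counter(labels).most_common(1)[0][0]

model = train_majority_classifier(biased_training_labels)

# On a balanced test set, the biased model is right only half the time.
balanced_test = ["cat"] * 50 + ["dog"] * 50
accuracy = sum(model == y for y in balanced_test) / len(balanced_test)
print(model, accuracy)  # -> cat 0.5
```

Real neural networks are vastly more sophisticated, of course, but the failure mode is the same in spirit: the model can only reflect the distribution of the data it was trained on.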
Various sources are reporting that Lyft may receive a significant investment from Alphabet. This would certainly be interesting in the autonomous vehicle space, since Lyft currently works with a variety of other autonomous vehicle companies (Alphabet owns Waymo, one such company). Forbes reports that Lyft currently has five such partnerships. Lyft seems to value partnerships and collaboration, so it will be interesting to see what such a large investor does to the equation. I can imagine Lyft cancelling the other partnerships, or even forming some sort of autonomous-platform consortium that settles on standards for integration.
I’m also not convinced that Lyft would use the investment just to attract new drivers or enter new markets. Historically, automating a task eventually becomes more efficient than manual labor, and if Lyft is taking the long view, it would be better off investing the $1 billion into developing autonomous vehicle technology rather than burning cash to attract human drivers. Don’t compete head-to-head with Uber over market share for ride-hailing; rather, tackle the big problem that will revolutionize the entire transportation industry.
With the release of the iPhone X comes a lot of analysis about the security of FaceID. I have a couple of more mundane usage questions, with the caveat being that I didn’t watch the whole keynote…
- How does it handle facial accessories:
- How does it work if I put on my sunglasses? Or readers?
- What if I normally wear glasses, but get a new frame with a different shape?
- What if I’m skiing or snowboarding (or pick your winter sport), and wearing my goggles / a ski mask / a scarf, but want to use my phone?
- What if I get a new ear piercing or nose ring or something?
- What happens during Halloween?
- Does facial hair affect it? What if I grow a beard or goatee, or even just get some stubble? What will happen to Movember??
I think these questions are similar to the wrist tattoo interference problems with the original Apple Watch, where real-world corner cases were not tested before product release. It will be interesting to see which real-world challenges FaceID stumbles over and which it handles well.