Bundling macOS Applications with Platypus

So I know that Apple is pushing all app developers to the App Store, but I’ve recently been playing around with Platypus for the Connected Learning Initiative project that I’m working on. I was curious how to bundle up a macOS application, but didn’t want to go through Xcode or the Apple Developer Program.

Platypus is pretty nifty because it organizes all of your files and scripts into the appropriate .app folder structure. There were a couple of things I couldn’t find in the documentation, however, that I wound up having to figure out:

Our bundle actually needs to run two executables on launch (one acts as an assessment engine, the other as an ePub reader), but Platypus only lets you launch a single script when you open the app. I wound up writing an AppleScript-based wrapper that opens two Terminal windows and runs one executable in each. And because the path passes through both an AppleScript string and Terminal’s shell, spaces have to be escaped with a double backslash (i.e. \\ instead of the single \ you’d typically use on the command line):

#!/bin/bash

# Build an absolute path to the ePub reader and escape spaces with "\\ ",
# so they survive both the AppleScript string and Terminal's shell.
EPUB_READER_SCRIPT=`pwd`/epub_reader_executable
EPUB_READER_SCRIPT=${EPUB_READER_SCRIPT// /\\\\ }
echo "Running ePub reader from $EPUB_READER_SCRIPT"
osascript -e "tell app \"Terminal\" to do script \"$EPUB_READER_SCRIPT\""
sleep 3

# Same dance for the assessment engine, in a second Terminal window.
ASSESSMENT_ENGINE_SCRIPT=`pwd`/assessment_engine_executable
ASSESSMENT_ENGINE_SCRIPT=${ASSESSMENT_ENGINE_SCRIPT// /\\\\ }
echo "Running assessment engine from $ASSESSMENT_ENGINE_SCRIPT"
osascript -e "tell app \"Terminal\" to do script \"$ASSESSMENT_ENGINE_SCRIPT\""
sleep 3

# Finally, open the app's web UI in Chrome.
/usr/bin/open -a "/Applications/Google Chrome.app" 'https://localhost:8888/'

One other challenge was finding bundled data files. I was including static files, ePubs, and JSON data files in the application bundle, since it’s meant to be a standalone LMS / learning application, and I couldn’t figure out why my scripts couldn’t find the data. The Platypus documentation indicates that scripts in the app bundle can reference other files in the bundle relative to themselves. The directory structure inside Contents/Resources was:

main_script.py
database.sqlite3
data_files/
    foo.json

However, because I was launching new Terminal instances, the Python scripts inside the .app bundle had their current working directory set to the Terminal default, which is the user’s home directory. For example, in my main_script.py file, os.getcwd() would return the user’s home directory instead of the app’s Contents/Resources directory.

I wound up having to do something like this in my main_script.py:

import os

# Resolve the directory containing this script and make it the working
# directory, so bundle-relative paths to the data files resolve correctly.
ABS_PATH = os.path.dirname(os.path.abspath(__file__))
os.chdir(ABS_PATH)

in order to find and read files relative to my main script.
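With the working directory fixed, the bundle-relative paths from the layout above just work. For example, a quick sketch of reading that foo.json file:

import json

# Paths are now relative to Contents/Resources, where main_script.py lives.
with open("data_files/foo.json") as handle:
    data = json.load(handle)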

Exploring Quantum Algorithms

I’ve come to realize that making the best use of quantum computing in the future requires knowledge of quantum algorithms, so I’ve spent the last couple of weeks trying to understand those better. It’s interesting that many such quantum algorithms, ones that take advantage of quantum mechanics, already exist. To me, it seems like quantum programming will require knowing when to use which algorithm, much as machine learning programmers need to know which algorithm to apply depending on what data is available and the intended goal. I’ve been wanting to read this book by Ike, an MIT professor, so I’m waiting for the term to end so that the library copies become available. At the very least I’ll make an effort to skim it…

I’ve also been reading through the IBM Jupyter notebooks on how to use their QISKit library, specifically the one on optimization. In it, they talk about how you could use quantum computing to solve the traveling salesman problem, a classic in optimization research. It’s interesting that the brute-force method becomes intractable on traditional computers somewhere around N > ~20 cities, but if I understand the IBM notebook correctly, the problem could be brute-forced with (N - 1)^2 qubits; for example, N = 20 would require 361 qubits. Assuming the challenges with error correction as the number of qubits increases can be overcome, that seems like it might be a year or two in the future.

Until I read the Wikipedia article on the traveling salesman problem, I also didn’t realize that people have come up with alternate algorithms that can tackle instances of up to 85,900 cities, which is amazingly better than the brute-force method. While implementing these algorithms on traditional computers seems CPU intensive (i.e. a supercomputer was used), it also boggles my mind that to brute-force an instance that size on a quantum computer, you might need 85,899^2 (roughly 7.4 billion) qubits?!?! Given the state of the technology, that seems decades away, or impossible… It makes me wonder if there are also alternate, quantum versions of these non-brute-force methods that would make the required number of qubits more realistic, kind of like there is a quantum Fourier transform to mirror the traditional discrete Fourier transform. More reading required!
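As a sanity check on those numbers, here is a quick back-of-the-envelope calculation, assuming the (N - 1)^2 encoding from the IBM notebook holds:

# Qubit counts for a brute-force TSP encoding using (N - 1)^2 qubits.
def qubits_for_tsp(n_cities):
    return (n_cities - 1) ** 2

for n in (20, 85900):
    print(f"{n:,} cities -> {qubits_for_tsp(n):,} qubits")

# Prints "20 cities -> 361 qubits" and
# "85,900 cities -> 7,378,638,201 qubits", i.e. roughly 7.4 billion.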

Quantum Programming Languages

So the big news today in quantum computing came from IBM: here is the MIT Technology Review’s announcement, plus TechCrunch’s. Basically, IBM has prototyped a 50-qubit machine, and they are allowing the public to use a new 20-qubit machine via their QISKit library. In the past, they allowed the public to use a 5-qubit machine and researchers to register for 17-qubit access, but now everyone will have access to 20! It will be interesting to see what additional research folks can carry out now… I recall reading that there have been ~30 papers published using the IBM system, so researchers should be able to do even more going forward.

I have also been spending some time reading through the tutorial Jupyter notebooks for QISKit… wow. The library still seems to assume a strong grasp of quantum mechanics. It allows you to do things like set the gates and levels in your quantum circuits, which is pretty cool. But you have to understand how to formulate the energy equation for your problem in the first place (and then how the combination of gates defines that energy function, though I think that part could be codified). I would love to see either or both of:

  • More tutorials on how to construct quantum algorithms from real-world problems. Ising what? Do I need a Ph.D. for this?
  • Libraries that abstract away the quantum weeds, so that I don’t need to manipulate and set individual gates; the useful combinations should be hidden behind some higher-level command. I feel like this is simpler than the first bullet, honestly.
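To give a flavor of the gate-level programming I mean, here is a minimal sketch using a recent version of Qiskit (the API has changed considerably since the notebooks I was reading, so treat this as illustrative): it builds a two-qubit Bell state gate by gate and then samples it.

# Minimal gate-level Qiskit sketch: build a Bell state and sample it.
# Requires the qiskit and qiskit-aer packages.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

circuit = QuantumCircuit(2, 2)
circuit.h(0)                      # put qubit 0 into superposition
circuit.cx(0, 1)                  # entangle qubit 1 with qubit 0
circuit.measure([0, 1], [0, 1])   # read both qubits out as classical bits

counts = AerSimulator().run(circuit, shots=1000).result().get_counts()
print(counts)                     # roughly half '00' and half '11'

Even in this toy example I am placing individual gates by hand; nothing here tells me how to get from a real-world problem to that particular sequence of gates, which is exactly the gap I’d love to see tutorials and higher-level libraries fill.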

Quantum Cooking

Quantum computing definitely seems like a black box, and I was trying to find a real-world comparison. Here is how I see it:

Traditional cake recipe (courtesy of AllRecipes.com):

  1. Preheat oven to 350 degrees F (175 degrees C). Grease and flour a 9×9 inch pan or line a muffin pan with paper liners.
  2. In a medium bowl, cream together the sugar and butter. Beat in the eggs, one at a time, then stir in the vanilla. Combine flour and baking powder, add to the creamed mixture and mix well. Finally stir in the milk until batter is smooth. Pour or spoon batter into the prepared pan.
  3. Bake for 30 to 40 minutes in the preheated oven. For cupcakes, bake 20 to 25 minutes. Cake is done when it springs back to the touch.

Quantum computing cake recipe (as I understand quantum computing):

  1. Place eggs in a single-file line 6″ from the left of the countertop and 1″ away from you. Make sure the odd-sequenced ones are all aligned North-South and the even-sequenced ones are aligned East-West.
  2. Scatter flour into a 12″ by 8.43″ rectangle, exactly 2.58″ above the eggs.
  3. Levitate half cup of melted butter 7″ above your head.
  4. Stick your left thumb into your right ear and hop on one foot.
  5. In 5 minutes and 32 seconds, check your oven. There is an 80% chance you’ll find a cake, and a 1% chance you’ll find a squirrel. You won’t know until you check. Repeat this process 1000 times and you’ll figure out if that is really a recipe for cake, a squirrel, or nothing.

Qubits are not Bits!

I had a revelation in the last couple of days: qubits are not bits. Shocking, right? Let me explain.

In traditional computing, everything you see is a 0 or a 1, because everything is eventually compiled down to a bit representation. Groups of bits are used to represent instructions and information. When a programmer types in something like result = 1 + 2, all of it is converted into machine language, i.e. 0s and 1s. The set of bits that represents instructions is well-defined for each chip, like your mass-market Intel x86 processors. The values (bits) used in each operation are then manipulated and stored in memory, to be used later.
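As a toy illustration of what I mean by everything compiling down to bits, here is the bit-level view of that result = 1 + 2 example in Python:

# The values in "result = 1 + 2" are just bit patterns underneath.
result = 1 + 2
print(format(1, "08b"))       # 00000001
print(format(2, "08b"))       # 00000010
print(format(result, "08b"))  # 00000011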

In quantum computing, people talk about how qubits are 0, 1, or both at the same time, and I’d had the misconception that qubits just run the exact same instructions, with the same information, as traditional computing, except that all the instructions run simultaneously. But that’s not the case (as this article from Chemical & Engineering News explains very well). Qubits represent the state of things, not instructions or information. When you program a quantum computer, you set the qubits into a specific, defined state and let them naturally (i.e. magically) settle into a final, resting state. You don’t program explicit instructions for how that initial state turns into the final state; you just set up the right constraints on the system (how various qubits are tied together, for example, or which gates are used to influence each qubit). So when people talk about programming quantum computers, the program instructions exist outside of the “quantum” part. As this paper from Oak Ridge National Laboratory explains, a quantum architecture involves both traditional computing and what they call “quantum accelerators”.

A highly simplified comparison: when you fill up your car with gas, you fill it until the nozzle disengages because it senses that the tank is full (quantum programming: the state speaks for itself). You don’t think to yourself, “I need to put in 5.182 gallons of gas to fill it up” (traditional programming: you specify everything).

Improved Understanding of Qubits (Maybe)

One of the comments on my previous post about the physical representation of qubits prompted me to dig a little deeper into existing resources, to see if I could better understand how quantum computers settle into low-energy states naturally, and how that affects the programming side. While doing that research, I think I’ve clarified a couple of things for myself.

First, I found this great D-Wave whitepaper on how you would approach programming a quantum computer to solve a map coloring problem, compared to traditional computing. As they explain, instead of explicitly programming the steps of the algorithm, you instead program what I think of as the “error function”: the thing you want minimized. I see a lot of similarities to machine learning here, and I can see why people say that quantum computing could really help solve machine learning problems. In many machine learning algorithms, you try to minimize some error function (e.g. root-mean-square error on a test data set). Even Figure 3 of a quantum cell (page 6) looks kind of like a neural network, especially if you squint. But one thing I didn’t understand very well was the use of qubits in the cells. People talk about ~50 qubits being able to perform things traditional computing cannot. Yet according to this whitepaper, extending the problem to all 13 Canadian provinces and territories would require 104 qubits for a relatively straightforward problem (13 regions * 1 cell per region * 8 qubits per cell), which seems excessive… so I’m not quite clear whether that is because this is a simplified example or a consequence of the D-Wave architecture.
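To make the “program the error function” idea concrete for myself, here is a purely classical toy sketch (my own construction, not taken from the D-Wave whitepaper): each region gets a one-hot block of bits for its color, and a penalty function is zero exactly when every region has one color and no two neighboring regions share a color. A quantum annealer would minimize this same kind of function in hardware; here I just brute-force it.

# Toy "error function" for map coloring, minimized by brute force.
from itertools import product

REGIONS = ["A", "B", "C"]            # three hypothetical regions
NEIGHBORS = [("A", "B"), ("B", "C")]
COLORS = 2                           # one-hot bits per region (2 colors)

def penalty(bits):
    """Zero when the coloring is valid; positive otherwise."""
    assignment = {region: bits[i * COLORS:(i + 1) * COLORS]
                  for i, region in enumerate(REGIONS)}
    energy = 0
    # Each region must have exactly one color switched on.
    for region_bits in assignment.values():
        energy += (sum(region_bits) - 1) ** 2
    # Neighboring regions must not share a color.
    for a, b in NEIGHBORS:
        energy += sum(x * y for x, y in zip(assignment[a], assignment[b]))
    return energy

best = min(product([0, 1], repeat=len(REGIONS) * COLORS), key=penalty)
print(best, "-> energy", penalty(best))

The point, as I understand the whitepaper, is that in the quantum version you only ever write down the penalty (energy) function; the hardware is what finds the low-energy state.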

Second, I came across this great 2013 article from Scientific American that addresses some of the questions I raised in my previous post about qubit parity. Apparently, at least two types of quantum systems (superconducting circuits and trapped ions) solve simple calculations similarly, though there are differences in speed and error propagation. With Microsoft introducing topological qubits, it will be interesting to see whether they perform similarly to the other two technologies. One new term I learned from the article is “quantum volume”, which captures the idea that the published number of qubits does not reflect the actual processing power (what I referred to as qubit parity)! So increasing quantum volume, rather than just the “raw” number of qubits, is what will actually help solve more complex problems.

So overall I think +1.5 for understanding this week.

Qubit Parity

One of the things I see mentioned in quantum computing articles is that qubits are prone to interference from outside sources, which is why they have to be kept so cold, and why companies like Google want to understand how errors scale with the number of qubits. Recently Intel announced a 17-qubit quantum chip it delivered to QuTech, along with this interesting comment:

IBM’s latest chip also has 17 qubits. Having that number means QuTech will be able to run certain error correction protocols on the chip, Held said.

Since Microsoft is exploring topological qubits that it claims are more stable (and every company seems to be taking a different approach), it makes me wonder whether there is such a thing as “qubit parity”. I think of parity in two senses:

  • Can we compare qubits from Intel against qubits from IBM, in terms of processing power or potential? If Intel has a 17-qubit chip and IBM has a 17-qubit chip, would they be capable of performing the same calculations? Or, because they use different underlying qubit technologies, would one be more or less “powerful” than the other? Or perhaps, like GPUs and even more specialized chips, some types of quantum chips might be better suited to certain tasks than others?
  • Do 17 qubits really behave like 17 qubits, given that some are used for error correction, or would they behave more like, say, 14 error-free qubits? Basically, I wonder how much computational power you lose to error correction, regardless of technical approach. Is it roughly constant across approaches, say 10%?

It will be interesting to see how these factors play out in the market as more chips are fabricated and used in various applications.

The Physical Representation of Qubits

As I’ve been reading more about quantum computing, I’ve been wondering what the physical manifestation of a quantum computer would be. I know what modern-day silicon chips look like, but I have a hard time imagining what a quantum computer would look like. So let’s start at the smallest component — a qubit.

As Wikipedia so helpfully describes, there are many physical representations of qubits. My simple take-away is that all of them involve very small things, i.e. a single atom, an electron, or a photon. But these very small things require 1) supercooling (to reduce noise in each qubit) and 2) lasers (to make measurements), so while each individual element is small, the entire apparatus is large and very complex. Kind of like vacuum tubes and mainframes, in my imagination. As the technology advances, I wonder whether it’s even possible for quantum hardware to shrink the way modern-day transistors have, since the atomic scale is just so different, or whether quantum computing will only ever be available through the cloud.

I’ve also noticed that different teams are experimenting with different types of qubits and quantum computing: Google and UCSB with superconducting qubits, Microsoft with topological qubits, etc. So it’s not yet obvious that there is a dominant technology or approach in the field.

After skimming the paper from Google and UCSB, I’m still unclear on how qubits in general translate into computational work. It seems like after you measure the state many times, you get a probability distribution over the qubit’s possible states (and this extends to multiple qubits). So while a qubit can be in a superposition of its states (in real life), as soon as you sample it, you’ve digitized it, effectively reducing the qubit to a normal bit. And therefore, similar to how sampling analog music to create a digital representation loses data, I would assume that sampling qubits to figure out their state must also lose some (valuable?) information… Given all the hype, I’m probably missing some key ideas about the field, so I definitely plan to read more papers and articles. And maybe that is why error correction is so critical in quantum computing?
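To make the “sampling digitizes the state” idea concrete, here is a tiny classical simulation of measuring a single qubit (my own toy example, using NumPy): the state is two complex amplitudes, but each individual measurement returns only a 0 or a 1, and only the histogram over many shots tells you anything about those amplitudes.

import numpy as np

# A single-qubit state: amplitudes for |0> and |1> (normalized).
state = np.array([np.sqrt(0.8), np.sqrt(0.2)])

# Born rule: measurement probabilities are the squared magnitudes.
probs = np.abs(state) ** 2

# Every measurement "digitizes" the qubit into a plain bit.
rng = np.random.default_rng(0)
shots = rng.choice([0, 1], size=1000, p=probs)
print("fraction of 1s:", shots.mean())   # close to 0.2

The amplitudes themselves (including their relative phase) never appear directly in the samples, which is the kind of “lost data” I was wondering about.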

Quantum Computing — What is it?

In the last couple of weeks there have been a bunch of press releases about Microsoft’s soon-to-be-released quantum computing programming language, plus related articles from IBM, Google, and others. China is also getting press for using quantum effects to help secure satellite communication — wow!

The technology sounds fascinating, and it’s amazing that hardware and software for quantum computing are actually starting to appear. I remember reading science fiction books as a child that mentioned quantum computing as a super-theoretical concept, and I never imagined it would become reality so soon. I’m still trying to scratch the surface of understanding how these systems work, so my goal is to read, learn, and summarize here my understanding of quantum computing and how it will open opportunities for software developers. While I may never understand the mathematics fully, I don’t feel so bad: Bill Gates and Satya Nadella struggle with the concepts as well. I welcome any comments and feedback to help me improve my understanding.

My first instinct is that quantum computing will not completely replace classical computing; instead, they will co-exist. Classical computing is deterministic, which is wonderful. I want to know that the e-mail I send will get delivered. Quantum computing, on the other hand, seems to deal with probabilities. It seems better suited to modeling and simulating real-world phenomena, or to areas where some fuzziness or randomness is beneficial (like cryptography). Many articles state things like “N qubits can exist in a superposition of all 2^N states at the same time”. But you don’t know which state they are in until you “see” one (or more?), at which point, via entanglement, you can see what states the other qubits are in. How that helps computationally, I don’t quite grasp yet, nor do I truly understand the practical implications of quantum computing (disregarding all the marketing-speak)… so more reading for me!
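One part of that 2^N statement is easy to appreciate with quick arithmetic: just writing down the 2^N complex amplitudes of an N-qubit state on a classical machine blows up very fast. A rough sketch, assuming 16 bytes per complex amplitude:

# Classical memory needed just to store a full N-qubit state vector,
# assuming 2^N complex amplitudes at 16 bytes each.
for n in (10, 30, 50):
    amplitudes = 2 ** n
    print(f"{n} qubits: {amplitudes:,} amplitudes, {amplitudes * 16:,} bytes")

# 10 qubits is ~16 KB, 30 qubits is ~17 GB, and 50 qubits is already
# ~18 petabytes, which is one hand-wavy way to see why ~50 qubits keeps
# coming up as the point where classical simulation becomes impractical.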

MIT and IBM Watson

MIT recently announced a 10-year partnership with IBM, centered on artificial intelligence and the Watson platform. Days before the announcement, STAT News published a report that discussed the challenges Watson faces in oncology and in actually helping physicians and patients. One of the big challenges is training and maintaining Watson’s knowledge base. Lack of training data is also mentioned as a significant challenge in an MIT Technology Review article about Watson from earlier this year.

Data science has become a huge buzzword, and with the huge amount of data now available, many opportunities certainly exist to gain new insights. But machine learning / artificial intelligence / deep learning (whatever you want to label it) all face the same challenges they have always faced. Bias in training data leads to incorrect results, as with Google’s image tagging blunder in 2015. The lack of high-quality, “big enough” data sets makes training difficult, as seen with Watson for Oncology. These challenges stem from the basic mathematics behind neural networks and other machine learning techniques, which all rely heavily on the training data fed into the algorithms. To allow computers to learn like humans and succeed in different contexts, Google’s DeepMind is trying to make computers work more like human brains. More advanced research along these lines would be amazing to see out of the MIT / IBM collaboration, and I can’t wait to see what the researchers develop.