Artificial intelligence is not one thing.
Artificial intelligence is not an algorithm. An algorithm is a set method for completing a task. Typically, we talk about algorithms that are implemented by a computer and written in computer code. But algorithms can also be written in math, like the quadratic formula or the formula for the area of a circle; or they can be written in natural language, like a chocolate chip cookie recipe or instructions for assembling a desk.
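The quadratic formula mentioned above is a good illustration of an algorithm carried over into code: a fixed recipe that, given inputs, always follows the same steps. Here is a minimal sketch (the function name and example equation are invented for illustration):

```python
import math

def quadratic_roots(a, b, c):
    """Solve a*x**2 + b*x + c = 0 using the quadratic formula."""
    discriminant = b**2 - 4 * a * c
    if discriminant < 0:
        return None  # no real roots
    root = math.sqrt(discriminant)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# x**2 - 5x + 6 = 0 factors as (x - 2)(x - 3)
print(quadratic_roots(1, -5, 6))  # (3.0, 2.0)
```

Nothing here learns or adapts: the steps were fully specified by a human in advance, which is exactly what distinguishes a plain algorithm from the models described next.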
Artificial intelligence is not a model. A model is a set of mathematical formulas that is trained on data to make predictions and draw conclusions by finding patterns in that data. To train a model, it is assigned a task on a massive amount of data. This might be picking all the kitten photos out of 5 million animal pictures. Or it could be identifying whether people are looking for the state, the US capital, or the historic figure when they type in “Washington.” The model then builds its own process as it attempts to complete the task and work through the data. This is called machine learning. Machine learning is often called a black box. The term black box is accurate but incomplete. Imagine you put a child in the middle of the woods a mile from her house and tell her to find her way home. You attach a GPS beacon to her so you can track her progress. Now imagine you run the exact same experiment with a dog. The dog takes a very different path home than the girl, but still completes the task. Through the tracking data you can see perfectly the two routes taken: one by the girl and one by the dog. You know how they traveled home. But, unlike the girl, the dog cannot tell you why it made the choices it made in selecting its route. Like the dog, machine learning is unable to provide reasons and explanations for how it does what it does.
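The training process described above can be sketched in a few lines. This is a deliberately tiny model (a perceptron, one of the simplest machine-learning techniques); the data, labels, and function names are all invented for illustration. Note that it is never told the rule it ends up following; it builds its own process from examples, exactly as the paragraph describes:

```python
# A toy "model": trained on labeled examples rather than given explicit rules.

def train(examples):
    """Learn one weight per feature from (features, label) pairs
    using a simple perceptron update rule."""
    weights = [0.0, 0.0]
    for _ in range(20):  # several passes over the data
        for features, label in examples:
            prediction = 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0
            error = label - prediction
            weights = [w + error * x for w, x in zip(weights, features)]
    return weights

def predict(weights, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) > 0 else 0

# Invented toy data: the label happens to be 1 when the second number
# exceeds the first, but the model is never told that rule.
data = [([1, 2], 1), ([2, 1], 0), ([0, 3], 1),
        ([3, 0], 0), ([1, 4], 1), ([4, 1], 0)]
w = train(data)
print(predict(w, [2, 5]))  # classifies a point it never saw: 1
```

Even in this tiny case, the learned weights are just numbers; they do not come with reasons. Scaled up to millions of parameters, that is the black box the dog-and-girl analogy captures.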
Artificial intelligence is not robots. It can be robots, but it is not just robots. Robots are machines built and designed by humans that do automatic tasks. If something is automatic, it performs a programmed process. That process, as implemented in a robot, is written and designed by humans. Some robots today contain some aspects of artificial intelligence, but most robots are still just repeating the same immutable actions.
Artificial intelligence is not neutral. Artificial intelligence is built by humans, and those humans write algorithms. Those humans are often men, often straight, and often from privileged backgrounds. The data these humans use to train the algorithms they write is frequently gathered from whatever population is available, which might end up looking a lot like the humans who wrote the algorithms. This leads to problems when artificial intelligence confronts humans who don’t look, behave, or present in the same way as the humans who wrote it or whose data trained it. This is not necessarily the fault of those humans, and it does not mean that they should be excluded from work on artificial intelligence. Humans, like machines, are flawed. But humans, like artificial intelligence, can try to learn from mistakes. Humans can build better artificial intelligence if the humans who build it look like all different kinds of humans, and if they train it on data that represents all different kinds of humans.
Artificial intelligence is not the singularity. The singularity is a term that refers to a hypothesized moment in the future in which technological growth and machine intelligence can no longer be controlled by humans. Science fiction authors have written about the singularity for a long time. Philosophers, ethicists, technologists, and people with blogs have devoted a lot of energy and time to fearing or not-fearing the singularity. The singularity might never happen. Or it might. But if you are in a sinking ship and taking on water, it might be better to spend your time on pumping, fixing holes, and finding lifeboats than worrying about a pirate attack. So too is it perhaps more prudent to spend time on the urgent and knowable problems of AI than on those imagined ones that might never come to be.
Artificial intelligence is not, maybe, a distinction that will matter. Artificial intelligence is a term defined in contrast to natural intelligence, which is how we think of the intelligence displayed by humans and sometimes certain animals. But if a poem is written by a computer using artificial intelligence and the poem is so beautiful it makes you cry, does it make the poem mean less if you know the source is “artificial”? If a robot uses artificial intelligence to listen to and talk to an elderly woman, does it make the affection that woman feels for the robot mean less? Do things mean more again if you remember that the poem computer and the robot friend were running algorithms written by humans based on data sets constructed from humans? And if it’s really all about what humans are building and how humans react to what is built, isn’t the true question not about what artificial intelligence is or is not, but what it could ever be?
Kate Klonick is an assistant professor at St. John’s Law School, where she teaches property, internet law, and a seminar on information privacy. Her work has appeared in the Harvard Law Review, the Georgetown Law Journal, the Maryland Law Review, and is forthcoming in the Southern California Law Review and the Yale Law Journal. She is on Twitter at @klonick.