Explain it to me like I’m 5: Artificial Intelligence
There’s a ton of talk lately about artificial intelligence. Especially with the rise of ChatGPT, the term is making its way into daily news stories and conversations. Yet if you search for “artificial intelligence,” or “AI,” you get multiple similar-sounding, but slightly different explanations. While very few of them are wrong, the differences can still make it difficult to understand the basic concept.
Okay, so what is AI?
The simplest definition of AI is perhaps an oversimplification, but it's a useful starting point for a real grasp of the concept: AI refers to computer technology designed to mimic human problem solving, decision making, pattern recognition, learning, and perception.
If you do some digging, you will see other areas AI covers, but many if not most fall into one of the categories above. For example, speech recognition is considered artificial intelligence, but it technically combines aspects of pattern recognition and learning.
What AI is not is perhaps as important as what it is: AI does not imply self-awareness or sentience, though sophisticated AI responses and actions can seem eerily similar to what you might expect from another human. Nor is AI omniscient. At the moment, AI is limited to the data sets made available to it, whether those are pre-existing — say, the archives of National Geographic — or data gathered from interactions with people or other computer systems.
Additionally, AI is incapable of several important functions of human intelligence, such as empathy and intuition, which makes social interactions especially complex. AI cannot independently develop a moral and ethical framework. And AI has great difficulty making inferences on its own: it can mimic inference when programmed with specific responses to specific inputs, but this is not true inference.
Perhaps most importantly, AI isn't yet able to think in a truly creative way. Humans excel at developing novel, untried approaches to problem solving by applying information gleaned from completely unrelated experiences.
What many people think of when they think of AI is what experts call Artificial General Intelligence (AGI): a broad, general-purpose ability to act and solve problems as a human would, including adaptability and true reasoning. Some argue that AGI must also include self-generated emotional response, which could involve aspects of self-preservation and the development of preferences.
AGI does not yet exist. Though AI systems are increasingly capable and complex, they are still a very long way from achieving Artificial General Intelligence.
Where does machine learning fit into the picture?
Machine learning is a subset of artificial intelligence. In machine learning, the program builds its own data set from the inputs it receives, guided by the instructions it's given. For example, several brands of thermostat can be trained to recognize the days, times, or conditions that lead to you changing a setting. Over time, they can learn that when you wake up on weekdays at 6 a.m., you like the house to be 73°, and adjust automatically. The system takes multiple factors into account — day of the week, time, and current temperature — and makes a decision about what to do. That's the decision-making part. Learning what input you regularly provided under those conditions is the machine learning part.
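To make the thermostat example concrete, here's a minimal sketch in Python. Everything in it is hypothetical — no real thermostat exposes this API — but it separates the two ideas above: recording your past adjustments (the learning part) and picking a setting based on them (the decision-making part).

```python
from collections import defaultdict, Counter

class LearningThermostat:
    """Toy illustration of the thermostat example. Every time you
    manually change the temperature, it records the context (day type
    and hour). Later, it suggests the setting you chose most often in
    that same context. All names here are hypothetical."""

    def __init__(self, default_setpoint=68):
        # (day_type, hour) -> tally of the setpoints you chose
        self.history = defaultdict(Counter)
        self.default_setpoint = default_setpoint

    def record_adjustment(self, day_type, hour, setpoint):
        """The machine-learning part: build a data set from your inputs."""
        self.history[(day_type, hour)][setpoint] += 1

    def suggest(self, day_type, hour):
        """The decision-making part: choose the most common past setting,
        falling back to a default when there's no history for this slot."""
        choices = self.history.get((day_type, hour))
        if not choices:
            return self.default_setpoint
        return choices.most_common(1)[0][0]

# Simulate a week of weekday mornings where you set 73° at 6 a.m.
t = LearningThermostat()
for _ in range(5):
    t.record_adjustment("weekday", 6, 73)
t.record_adjustment("weekend", 9, 70)

print(t.suggest("weekday", 6))   # 73 — learned from your routine
print(t.suggest("weekend", 9))   # 70
print(t.suggest("weekday", 22))  # 68 — no history, so the default
```

Real thermostats learn from far messier signals than this (sensor noise, occupancy, outdoor weather), but the loop is the same: observe what you do under given conditions, then act on the pattern.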
Artificial intelligence exists without machine learning, but it tends to be limited to very specific tasks as a result. For example, several pattern recognition activities, such as optical character recognition, are technically AI. In the past, however, these systems were generally not capable of learning to identify characters in fonts or writing they weren't trained on. So machine learning was added, which led to handwriting recognition, for example (though perhaps not with huge success in the beginning).
Is there an end goal in mind here?
Yes, and it’s not sinister or intended to turn the Earth into a desolate wasteland filled with cannibalistic metahumans.
Simply put, the end goal is to develop systems that supplement, not replace, human intelligence, taking over tasks that are too menial, dangerous, or time-consuming for humans. Have you seen this incredible video of a Boston Dynamics robot running amok? Picture one of these with the added abilities we've discussed here.
“Robot, could you wake up the kids and get them ready for school?” Your robot knows the locations of your kids, understands what waking them entails, and knows they need clothes set out, toothbrushes loaded with toothpaste, and their bookbags ready. Oh, and it learned yesterday that little Billy had homework last night, so it reminds him to check that it's completed and in his bag. Plus, it adapts to this new request by fitting it into the routine tasks already laid out for it for the day, and resumes them once this task is complete. And later, when little Billy realizes he did NOT finish his homework and attempts to get the robot to do it for him, it understands the ethical implications (from a human perspective) and refuses.
And Billy fails. That’s the end goal. Failing Billy.
Okay, maybe not.
We talk about this here, obviously, because Botkeeper's automated bookkeeping platform employs machine learning and artificial intelligence to handle large numbers of transactions quickly, learning as it goes. It won't wake your kids, but it will build capacity and free up your bookkeepers to do even more for their clients. And we promise it won't bring about the apocalypse. Ready to learn more?