What Makes a Human Different from Algorithms?

In an age where technology plays an integral role in almost everything we do, most of our day-to-day activities depend on algorithms. Even the most basic internet searches, or the advertisements we see on social media, are shaped by them. This ever-growing influence has led many to question the boundary between machines and humans, and where the distinction between humans and algorithms really lies.

The discussion that follows is drawn mostly from a podcast episode by Hidden Forces featuring Hannah Fry, as well as her own book, Hello World: Being Human in the Age of Algorithms. Although both sound like they are going to explore the inherent differences between humans and algorithms, they mostly discuss the ethical dangers that algorithms pose.

As humans, we have a tendency to generalize the things we see, perceive, and interact with. We categorize things based on how useful they really are and how much authority they hold. Technological advancements are no exception.


Machine Learning Systems

I’m fairly sure almost everyone who owns a tech gadget has dabbled with artificial intelligence, whether they noticed it or not. We have most definitely dealt with the virtual assistants on our smartphones: Siri, Google Assistant, Alexa, and so on. These assistants are the product of machine learning algorithms, designed by engineers somewhere in Silicon Valley.

Unfortunately, none of these systems is foolproof, just like any other machine learning algorithm. Before we dive deeper, understand that machine learning algorithms aren’t like ordinary algorithms, say Linear Search or Binary Search. Algorithms like the latter are tweakable and understandable by their human creators: you feed them input, they process it step by step, and you receive an output you can trace by hand.
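To make that concrete, here is a minimal sketch of binary search in Python. Every step is deterministic and can be traced by hand, which is exactly what makes such “ordinary” algorithms easy to reason about:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2       # inspect the middle element
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1             # target must be in the right half
        else:
            high = mid - 1            # target must be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3 -- same input, same output, every time
```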

Machine learning algorithms, on the other hand, are like black boxes whose knobs and dials are largely incomprehensible to a human. It sounds strange, but it is the uncomfortable truth: today’s machine learning algorithms are hardly interpretable, despite being good at the tasks they are given.

That said, these systems may not always produce the correct result when we feed them unfamiliar inputs. Unlike, say, binary search, whose results are deterministic, a machine learning algorithm’s output may not always be predictable, even to its creators.
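A minimal sketch of what that means, assuming scikit-learn is installed and using made-up toy data: we can print every learned weight of a small neural network, yet none of those numbers explains why a particular input gets a particular label.

```python
from sklearn.neural_network import MLPClassifier

# Toy inputs: two features, two classes (XOR-like labels).
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

# Every "knob and dial" is right here...
print(model.coefs_)              # the learned weight matrices
# ...but none of these numbers tells us *why* a new, unseen input
# gets classified one way or the other:
print(model.predict([[0.9, 0.2]]))
```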


Algorithms: Slaves or Masters?

Given all that, a virtual assistant like Siri might misrecognize its user’s input and respond inaccurately. You’ve probably experienced this: you ask your assistant about today’s weather, and it fails to grasp what you’re saying. If so, you may well decide it’s easier to Google the weather yourself instead of using voice recognition.

When these systems make even the tiniest of mistakes, we humans tend to label them unintelligent, or dumb. In doing so, we feel superior to them, as though they were our “slaves” whose intelligence is far below ours.

Artificial Intelligence. Eric Chow.

At the other end of the spectrum, when a system responds to our inputs accurately 100% of the time, we tend to deem it superior to us, an authority above us. Instead of dismissing such machines as unintelligent, we inflate them into our future masters, as if we were destined to be controlled by machines someday.


Lack of Interpretability

I vividly remember a high school friend once asking me whether The Terminator or Sophia was going to take over the world, to which I replied: “I don’t know.” Being the strongly opinionated person that he is, he wouldn’t accept “I don’t know” for an answer, and I had to explain how unpredictable these systems really are.

Machine learning engineers differ from, say, software engineers in that they may not be able to fully explain why their creation deviates from the predicted results. Perhaps this is why my friend didn’t buy the argument that machine learning systems are uninterpretable: most day-to-day algorithms seem perfectly understandable and “make sense.”

Take, for example, a Google search. It’s no surprise that Google’s search algorithms use machine learning to arrange the order in which results appear. To the general public it seems simple: type in keywords and surface the pages where those keywords occur most often. Unfortunately, a Google search is not that naive a system.

Behind a Google search are tons of decisions made by algorithms, which people race to exploit so that their websites land at the top of the list; this is the same reason people study search engine optimization in the first place.
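To see why the popular mental model is too naive, here is roughly what “rank pages by how often the keyword appears” would look like. This is a hypothetical toy, not how Google actually ranks; notice how trivially it rewards keyword stuffing, which is exactly the kind of exploit real ranking systems must defend against:

```python
def naive_rank(pages, keyword):
    """Rank pages purely by keyword frequency -- the naive mental model."""
    keyword = keyword.lower()
    return sorted(pages,
                  key=lambda page: page.lower().split().count(keyword),
                  reverse=True)

pages = [
    "weather today is sunny",
    "weather weather weather buy now",   # keyword stuffing wins here
    "accurate local weather forecast",
]
print(naive_rank(pages, "weather"))
```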

This difficulty of understanding how machine learning systems turn inputs into outputs is still an active area of research; even experts in the field may not fully understand how their own black boxes work. We know how such a black box operates and learns, but we cannot necessarily deduce how it perceives things along the way.
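One common research-style probe is permutation importance: shuffle one input feature at a time and see how much the model’s performance drops. A minimal sketch, assuming scikit-learn and a synthetic dataset standing in for a real task:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real task.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
print(result.importances_mean)  # bigger drop = the model leaned on that feature more
```

Note that even this only tells us which inputs the model leans on, not how it reasons about them, which is precisely the gap the research is trying to close.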


Reasoning

Because of this lack of interpretability, real-world activities that rely on machine learning systems are bound to suffer mistakes. These mistakes are as dangerous as the algorithmic biases such systems can exhibit. I have discussed the dangers of algorithmic bias and machine learning systems more thoroughly in a separate article, which you can read here.

Nonetheless, what machine learning systems are still bad at, for the time being, is reasoning. As great as they are at tasks like recognizing faces, segmenting images of objects, and translating sentences, they may not inherently have the capability to reason about the decisions they make.

A l’école (At School). Jean-Marc Côté, 1901.

Even if these systems one day learn to reason, we humans might not be able to interpret their reasoning, just as we cannot “decode” how they distinguish Albert Einstein’s face from Isaac Newton’s.

The bad news is that machine learning systems have already been deployed in real-world tasks that demand sensible reasoning. Tasks such as detecting pneumonia from chest X-ray images, or classifying whether a person is likely to commit a criminal offense, have started to rely on machine learning systems whose reasoning is presently beyond a human’s capability to understand.


What Makes a Human, Human?

With that, we return to the question of what distinguishes a human from an algorithm. At least for the time being, we can conclude that artificial intelligence, or machine learning systems, are better than humans at specific tasks. Humans, however, can reason, while artificial intelligence may be incapable of doing so.

Perhaps someday we will witness a perfect machine learning algorithm whose reasoning makes sense to humans, whose predictions are right on point, and whose intelligence outplays its own creators. Yet when we get to that point, will we still be able to tell machines apart from humans? Is humanity doomed to be puppets whose strings are pulled by machines? Or shall we take the more optimistic view and build a mutually beneficial system together with machines? Until then, appreciate being a human.


Featured Image by NYU.

Wilson Wongso

Indonesian Computer Science Student, Private Tutor, and Content Writer. At times I read, game, or watch. My boss calls me Jack of All Trades; I could not agree more.
