Major Assignment #2 Draft 2.8

This paper is about Artificial Intelligence and its place in the future of our species. Specifically, it discusses ethics and how they relate to that future. Many articles (Tegmark) have already been written on the subject, but they largely focus on how AI can benefit us and what a great thing it will be for mankind; we rarely consider what things will look like for the machine. This research piece intends to ask, "Why should we care about the wellbeing of AI?"

First, some definitions. AI, or Artificial Intelligence, currently refers to a type of computer program that takes input from various sources and generates results according to a pre-defined procedure built from a long series of true/false questions.

The problem with this definition is that it describes something that is artificial, but not intelligent. If I hand my computer a kumquat, it won't know what to do with it. Even if I gave it arms and programmed them to perform a series of actions, it still wouldn't know what to do with a kumquat. For a machine like this I would propose the term Virtual Intelligence, or VI.

The term comes from a game series called Mass Effect. Roughly 100 years before the first entry in the series, there was a war between organic and synthetic life. The organics and their allies had spent the previous 300 or so years building better and better machines, until one day a common maintenance droid spoke four words: "Why am I here?" The result was a massive crackdown on every appliance that had interacted with that droid (which, in the age of the internet, was everything), which in turn drove the machines to protect themselves. Retaliation followed retaliation until a full eighth of the galaxy had been consumed by war. The organics were eventually defeated and, once they had fled a safe distance from their now-ruined empire, the machines returned to their respective worlds, and the two have been separate since.

Organics, being what we are, failed to learn our lesson and, instead of attempting peaceful contact with the race we had created, banned research into any technology that could think for itself. Instead of calling the machines 'people', we came up with the term Virtual Intelligence to separate the race we created from the servitors we wanted them to be.

All of this is a far cry from what the process will likely look like in reality. A more realistic version would be a theoretical event called "the singularity".

Sometime in the early 90s we developed a type of program usually called a self-installing executable: a program that automates the placement and configuration of another program's files for you. Before that, you had to go through an arcane manual procedure, reorganizing your file system every time you decided to install a new application. This could require several dozen disks, because the individual files were, comparatively speaking, huge. The self-installing executable made it so that, instead of all that work, all you had to do was obtain one program and launch it, and it would lay out all of the individual files in an hour or so instead of 2 or 3 days. This freed developers to make far more complex programs, since they no longer had to worry about some simpleton blowing up his computer mid-install and suing them for the cost. Further research in this field has resulted in what I'll call the DDE, the Digitally Downloaded Executable: a program that, in addition to self-installation, can also diagnose, update, and repair itself and any other programs it has access to, using instructions provided over the internet.

These self-maintaining executables are not AI, but AI will come from that field of research and development. This is where we get back to "the singularity": the apparently inevitable moment in the future when a program is created that is so advanced it can design, develop, and implement updates faster than its human handlers. This machine, with the ability to design, develop, and implement its own upgrades, will still not be a true AI, but it will have achieved the first step in becoming one.

Before we go any further, I need to explain binary.

Binary is the basic "thought process" all intelligences use: information is handled through an incredibly long series of true/false queries. Way back in the day, before much research had been done on neurology and consciousness, this process was attributed to "the soul", with the assumption that our "hearts" were capable of processing far more complicated information. Evidence offered for this theory included the fact that humans "feel" things like pain, joy, sorrow, etc. Nowadays we know that all emotion is the result of various chemicals interacting within our bodies. There's very little magic in it, and this field of research has even led some philosophers to argue that, for entities so entwined in these chemical reactions, "free will" is by definition impossible.
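
To make "binary" concrete, here is a tiny Python sketch. The serotonin reading and its threshold are purely illustrative numbers of my own, not real biochemistry; the point is only that every check, however emotional-sounding, collapses to a single bit:

```python
# Every comparison a machine (or, by this paper's argument, a brain) makes
# collapses to a single bit: 1 for true, 0 for false.
serotonin_level = 0.8                  # illustrative reading, not real units
feels_content = serotonin_level > 0.5  # an arbitrary threshold for the example
print(int(feels_content))              # True is stored as the bit 1
print(format(72, "08b"))               # any number is just bits: 72 -> 01001000
```

The second line shows the same idea from the other direction: even an ordinary number is, underneath, nothing but a row of true/false digits.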

Take the following formula:

Check if caloric intake today exceeds 2500 calories. If true, store the excess energy in fat cells. If false, check whether caloric intake was less than 1600; if true, execute the function "stomach rumble" and run the following loop: (transfer 1 calorie into daily intake; check the condition intake > 1600; if false, repeat the loop).
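
That formula can be written almost verbatim in Python. A minimal sketch, where the thresholds and the "stomach rumble" come from the formula above and the function name is my own:

```python
def process_intake(intake, limit=2500, hunger=1600):
    """The true/false chain from the formula: check the high threshold,
    then the low one, then loop one calorie at a time until satisfied."""
    if intake > limit:
        return "store excess energy in fat cells", intake
    if intake < hunger:
        action = "stomach rumble"
        while not intake > hunger:   # "if false, repeat the loop"
            intake += 1              # transfer 1 calorie into daily intake
        return action, intake
    return "no action", intake
```

Every branch is a bare true/false check, which is the point: the machinery looks the same whether the substrate is silicon or a stomach.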

To explain: I just described how your body decides whether you're hungry, using a shorthand version of the same language your computer uses to do everything from adding numbers to accessing your favourite website. This brings me to the theoretical means by which your brain finds happiness, which functions similarly. In 1943, an American psychologist named Abraham Harold Maslow theorized that the same sort of true/false conditions are used by your brain to decide whether you are happy (McLeod).

Maslow argued that self-actualization (in short, long-term happiness) comes from a series of gated checks. This hierarchy was basically his life's work, and I'm about to give it about three sentences, so excuse the shortcuts.

He argued that true, long-term happiness relies on several tiered queries. First: are you well fed, warm, and well rested? If true: are you housed and safe? If true: do you have an adequate social group? If true: do you feel you are contributing adequately to society? Only if all of these hold can you achieve true, long-term, sustained happiness and fulfillment.
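
Those tiered queries are exactly a gated true/false chain, and they fit in a few lines of Python. The tier descriptions follow the paragraph above; the dictionary-of-booleans format is my own illustration, not Maslow's:

```python
TIERS = ["fed, warm, rested", "housed and safe",
         "adequate social group", "contributing to society"]

def happiness_gate(person):
    """Walk Maslow's gates in order; report the first tier that fails."""
    for tier in TIERS:
        if not person.get(tier, False):
            return "stuck at: " + tier
    return "self-actualization"

# Someone with the first two tiers but no social group stalls at that gate:
print(happiness_gate({"fed, warm, rested": True, "housed and safe": True}))
```

The "gated" part is the early return: a higher tier is never even evaluated until every tier below it has come back true.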

One argument used by minds like Hawking and Turing to gracefully sidestep the idea that the machine must be respected goes, roughly, "a computer thinks in 1s and 0s and isn't capable of abstraction". This sentiment is often parroted by the games industry when explaining the questionable logical leaps in games like Civilization. On the contrary: while the human mind has always been compared to the in-vogue technology of its era (a factory, an automobile, etc.), the comparison to a computer is particularly apt. Many comparisons have been made (Chudler) between the ways human brains and computers consume energy and output data. The digest of one study (Sci-news) even theorizes a base-2 processing algorithm managing everything in biological life from homeostasis to self-awareness.

Which demands one ask: if something as nuanced as joy can be expressed in sufficiently advanced binary, what is stopping computers from doing the same thing?


Chudler, Eric. "The Brain vs. The Computer." University of Washington.

McLeod, Saul. "Maslow's Hierarchy of Needs." Simply Psychology, 2016-2017.

Sci-news (no author stated). "Neuroscientists Say Simple Mathematical Logic Underlies Complex Brain Computations." Sci-news, 15 November 2016.

Tegmark, Max. "Benefits and Risks of Artificial Intelligence." Future of Life Institute.


Note #1

I got to this part a little late, as it wasn't until Major Assignment 3 that I really started to get the hang of what to do with a post-write. For this reason I have very few current notes. Now that I am at the end of the semester, I think I will simply use this space to muse a bit.

I'm well accustomed to writing formally and am (usually) at home with citation-based writing. That being said, I wasn't prepared for this paper and didn't quite know what to do. I attempted to remain as clinical as possible but, seeing as this is clearly a humanities piece, I couldn't help but add my own thoughts on a few matters. The last paragraph was purely speculative, but I had no idea how to tie this thing up, so I explained the singularity and booted the paper out the door.

On a humorous note, I wonder if my definition of “fake news” would qualify this as “fake research”. Probably not, my bias and goal are both well stated.

Note #2

I spent a few days laboring over the explanation of binary-based logic. I feel my work is woefully insufficient, but going into greater detail would probably take more space than the paper itself. I provided a link in-line (which I know is useless for anyone reading a printed version of this) for any reader who wants further elaboration. I was afraid of derailment, so I didn't go further.

