My Quest for Artificial Intelligence

This post logs my ongoing quest to create artificial general intelligence (AGI). I will continually update this post as I find and explore more interesting ideas for creating AGI.



Thoughts as of December 25, 2018

I’ve decided to log my journey toward becoming an AI researcher in this post.

I started this journey thinking that I’d follow my timeline of “tick-tock”ing between computer science and neuroscience textbooks and posting notes to CR4-DL. However, after going through two textbooks, I came across the book “Mastery” that my sister left behind. Mastery convinced me to become an expert researcher in science, the likes of Einstein and Turing, but for AI. It was also around this time that I realized I wasn’t retaining and learning as much as I wanted from what I was reading, so I searched for ways of learning how to learn. I came across the book “Make It Stick” and it sold me on its conclusion that we learn best through retrieval practice distributed over time. After reading some excellent science fiction (the Three-Body Problem series), I picked up the book “Peak”.

After reading Peak, I think I’m done with this learning-to-learn and mastery/expertise path, and I’m ready to start tackling actual AI research. There are a few other skills I want to learn (cooking, writing), but the meta-mastery work is done. I’ll always be reminded of Wade from “Death’s End”: we must keep advancing, stopping at nothing. The end goal is artificial general intelligence and nothing less.

My current plan is to devise a path to AGI. I’ve brainstormed a few. One is to follow the evolutionary path and hope that simulating evolution produces intelligence, but it has a low chance of success and a high time investment. Another is to follow the human developmental path: start with a child-like system and have it work its way up. It has a higher chance of success but is more difficult than the evolutionary path.


Thoughts as of March 23, 2019

I’ve edited my last journal entry for clarity (and will continue editing all entries in the future). Man, was it bad. Anyways, I completed the cognitive science textbook a week ago and I’m working on editing the notes to be more presentable. A few health problems have come up, and if I die, I hope someone will take up this mantle of building AI through the neuroscience and cognitive science approach. That’s because I’ve realized something about problems in general: they have weak points. And I believe the weak point of the AI problem is the brain, because it’s the only known system that exhibits intelligent behavior.

I’m still working out the details of how to create AI, but one thing I’ve made progress on is the realization that we’ll need some measure of how close we are to AGI. As the saying goes, “If you can’t measure it, you can’t improve it.” Whether it’s the complexity of the games a system can play, how many jobs it can replace, or how close it comes to passing the Turing test, we need some measure. I don’t know which measure is most relevant or best, but since we’re aiming for AI, a measure of intelligence is needed.


Thoughts as of May 9, 2019

A lot of interesting things have happened since my last entry. I’ll start with the biggest one.

I recently came across the idea of “neuromorphic computing”, computation modeled on the human brain. I first encountered it in the book “Artificial Intelligence: Perspectives from Leading Practitioners”, where Dharmendra Modha (from IBM) talks about it. The main principles of a neuromorphic chip are:

  • Non-von Neumann architecture: no separation between processing and memory.
  • Event-driven rather than clock-driven like our current computers.
  • Extremely power efficient, since no energy is wasted on idle clock cycles.
  • Massively parallel.

I believe this is the missing piece in the quest for AI. I’ve always had some reservations and skepticism about using digital computers as the platform for AI, since their computing principles are so different from the brain’s. So when I learned about neuromorphic chips, I was extremely excited and hopeful that I had found (what I believe to be) the hardware piece of AI. In conjunction with the hardware, we’ll also need new software to run on these neuromorphic chips, and for that, we turn to cognitive psychology.
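
To make the event-driven principle concrete, here’s a minimal sketch of my own (a toy illustration, not how any real chip such as IBM’s TrueNorth actually works) of a leaky integrate-and-fire neuron that only computes when a spike event arrives, instead of updating on every clock tick:

```python
import math

class LIFNeuron:
    """Toy leaky integrate-and-fire neuron, updated only when events arrive."""
    def __init__(self, tau=20.0, threshold=1.0):
        self.tau = tau              # membrane time constant (ms)
        self.threshold = threshold  # firing threshold
        self.v = 0.0                # membrane potential
        self.last_t = 0.0           # time of the last event

    def receive(self, t, weight):
        # Decay the potential for the time elapsed since the last event,
        # then integrate the incoming spike. No work happens between events.
        self.v *= math.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:
            self.v = 0.0   # reset after firing
            return True    # emit an output spike
        return False

neuron = LIFNeuron()
# Events are (time_ms, synaptic_weight); nothing is simulated between them.
for t, w in [(1.0, 0.6), (2.0, 0.6), (40.0, 0.6)]:
    if neuron.receive(t, w):
        print(f"spike at t={t} ms")  # the two close events fire; the late one decays
```

The point of the sketch is that the cost scales with the number of spikes rather than with wall-clock time, which is where the power efficiency in the list above comes from.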

I’m currently taking a course on cognitive psychology and I’m really enjoying it. As I learn more about the brain, I believe more and more that AI will have to be a combination of specialized components, like the brain. I haven’t gotten to the book on this yet, but the brain seems to be a “kludge” in which different parts are specialized for different functions. For example, we have a specific region for face recognition, and if it’s destroyed, we lose the ability to recognize faces (a condition called prosopagnosia).
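
As a loose illustration of that modular picture (my own toy sketch, not anything from the course), here is how a “kludge” of specialized components behaves under damage: knocking out one module removes one ability while the others keep working.

```python
class ModularBrain:
    """Toy sketch of a 'kludge': independent modules specialized by function."""
    def __init__(self):
        self.modules = {
            "faces": lambda stimulus: f"recognized {stimulus} as Alice",
            "objects": lambda stimulus: f"recognized {stimulus} as a cup",
        }

    def lesion(self, name):
        # Simulate localized damage, e.g. to the face-recognition region.
        del self.modules[name]

    def recognize(self, kind, stimulus):
        fn = self.modules.get(kind)
        return fn(stimulus) if fn else f"deficit: cannot process {kind}"

brain = ModularBrain()
brain.lesion("faces")                        # prosopagnosia
print(brain.recognize("faces", "a photo"))   # deficit: cannot process faces
print(brain.recognize("objects", "a photo")) # object recognition is untouched
```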

Another idea that’s been marinating in my mind is that we need a theory of cognition/mind. The theory would explain how behavior arises from neural activity. An example would be that the release of the hormone prolactin after orgasm explains why men can’t orgasm continually. See this link for more details, as it’s a clear and concise explanation of a behavior arising from neural and chemical properties.

Speaking of how neural activity translates to behavior, I think the mind-body problem in philosophy is ridiculous and irrelevant to the study of the brain and mind. The argument is based on the assumption that the mind and body are separate. But this isn’t true: changes to my body affect my mind and vice versa. For example, I can take drugs and they affect my mind, and I use my mind to control my body. This two-way link suggests the mind and body aren’t separate.


Thoughts as of January 16, 2020

I haven’t updated this journal in a while due to school and laziness, but that doesn’t mean nothing interesting has come up. Here are a few new developments.

The first development is that I now believe consciousness and intelligence are interlinked, and that AI cannot be achieved without first building consciousness. My reasoning comes from two pieces of evidence. The first is the book “The Feeling of What Happens”, which details an evidence-based theory of consciousness. To summarize, the book hypothesizes that consciousness arises from the brain applying feeling to the act of feeling: the brain builds a “map of a map”, a representation of itself representing, and that is how it achieves consciousness. The book further hypothesizes, based on various clinical cases and conditions, that core consciousness is located in the brain stem. Further details can be found here, but the main point is that in the absence of consciousness, no intelligent behavior occurs (and barely any behavior at all).

This leads me to my second piece of evidence, in the form of a question: have we ever seen an intelligent person without consciousness, or vice versa? I don’t think we have, because the two are intimately linked. You can test it yourself by trying to reason or plan without the use of your internal voice or internal eye. I’m not able to do anything, because there is no guidance, no director. While the two may be separable by definition, it’s hard to say that intelligence can exist without consciousness when we have no examples of it occurring.

The second development is a recurring set of themes that I’ve noticed across various brain theories. The brain theories I’ve looked into are Chris Eliasmith’s Neural Engineering Framework, Jeff Hawkins’ Thousand Brains Theory of Intelligence, and the After Digital textbook. The recurring themes are:

  • The notion of representations
  • The notion of information
  • Using dimensions as features of the data
  • The sparsity of representations
  • The treatment of time

While I don’t agree with any of the three theories/frameworks, they all seem to be getting at common features of the brain that I think will be useful in building AI. It’ll take more time and work to tease out the commonalities, but the recurrence of these themes is probably hinting at something important.
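
To make a couple of those themes concrete, here’s a toy sketch of my own (not taken from any of the three frameworks) of sparse, high-dimensional representations: each dimension acts as a feature, only a few percent are active at once, and similarity shows up as overlap between active bits.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_rep(n_dims=2048, n_active=40):
    """A sparse binary vector: 2048 feature dimensions, ~2% active."""
    v = np.zeros(n_dims, dtype=np.uint8)
    v[rng.choice(n_dims, size=n_active, replace=False)] = 1
    return v

def overlap(a, b):
    """Similarity = number of shared active features."""
    return int(np.sum(a & b))

cat = sparse_rep()
# Build a "dog" that keeps half of cat's active features and swaps the rest.
dog = cat.copy()
dog[rng.choice(np.flatnonzero(cat), size=20, replace=False)] = 0
dog[rng.choice(np.flatnonzero(cat == 0), size=20, replace=False)] = 1
car = sparse_rep()  # an unrelated concept

print(overlap(cat, dog))  # 20 shared features: related representations
print(overlap(cat, car))  # near zero: unrelated representations
```

Sparsity is what makes such a scheme cheap and robust: comparing two concepts touches only a handful of active bits out of thousands of dimensions.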

Anyways, the third development is that I’ve been applying to master’s programs in Canada. Specifically, I’ve applied to the following programs:

  • Master’s in Systems Design Engineering at the University of Waterloo
  • Master’s in Neuroscience at McGill University
  • Master’s in Computer Science at McGill University
  • Master’s in Neuroscience at Western University

I’ve been applying because I need more time to self-study the brain and because I want to become a professional scientist/researcher. While this isn’t directly related to AI, this education is what I need to push myself to learn more, to grow up, and to become independent.


Thoughts as of August 26, 2020

The world has changed a lot in the past couple of months. COVID-19 changed the way a lot of people live, but my life is pretty much the same. In quarantine, I’ve been using my free time to read a lot of textbooks and books. In the past five months alone, I’ve read six textbooks and four books, far more than I ever have before. Even with all this time spent reading, I still feel as though I haven’t devoted enough time to AGI and the brain. I still spend a lot of time watching random YouTube videos or playing video games, and not enough time performing deliberate practice, making and using Anki flashcards, or just thinking about new ideas. I plan to manage my time more efficiently and effectively when my master’s program starts.

I was accepted into the MSc in Neuroscience program at Western University, where I’ll be applying data science to brain data. The program starts remotely, but I hope to move closer to campus in the future. Aside from this news, my journey to creating AGI has gotten complicated. I’m at a crossroads that I don’t know how to navigate. Over the past few months, as I’ve learned more about the brain, I’ve realized that I don’t know which problem I should focus my time on, because they all interest me. My current paths are:

  • Building neuroscience-inspired machine learning algorithms. By going down this path, I end up working for a company like DeepMind/OpenAI, publishing papers on new learning algorithms inspired by the brain, and becoming another no-name scientist.
  • Working towards causality-based AI. By going down this path, I work on applying the new science of causality from the work of Judea Pearl to AGI. This path has more potential than the neuro-inspired ML one, but less than the next two.
  • Starting a neuromorphic company. By going down this path, I create the substrate that I believe AGI will be built on and apply my engineering skills to recreating the hardware of the brain.
  • Working on a theory of consciousness. By going down this path, I forsake AGI and focus solely on explaining consciousness in the brain. This path has the greatest potential but I forgo my desire to build AGI.

Each path has its pros and cons, but all have some connection to the brain and AGI. I have to choose a path because I need to focus my time on one problem and become an expert in it. Unfortunately, I don’t have enough lifetimes to explore every path, so I must pick one. Right now, I’m leaning towards the consciousness and neuromorphic paths, as I don’t want to work on neuro-inspired ML or causality-based AI. While I believe those problems are important, I don’t see myself enjoying the work as much, nor do I believe I can make significant contributions to them.

The neuromorphic company path is the most novel and exciting, but it comes with high risk. I know almost nothing about starting a company or about electrical engineering, so it will be extremely difficult. But I believe that neuromorphic chips are the future, and I’ve learned a lot about the design properties of neurons. The business space isn’t very crowded, so it wouldn’t be difficult to make a name for myself. It’s hard to say whether this is the right path for me, but I can say it would be an exciting one.

The consciousness path is the one I have the most confidence in. Ever since I was a child, I’ve been interested in science, and consciousness is one of those rare fields of my time where much is seen but little is known. That is to say, the field has potential, and I can help bring that potential out. The current state of consciousness research isn’t as far-fetched as, say, light-speed engines, nor is it as well explored as computer science. I can see myself making big contributions to the field, and it wouldn’t be as hard as the neuromorphic path.

I’m forcing myself to decide because I want to become a significant figure in whatever field I choose. I want to be well known for my work and for my contributions to humanity. Whichever path I take, I will devote all of my time and energy to it to ensure that we make progress. The road ahead is paved with squirrels that couldn’t decide whether to cross or go back.


Thoughts as of January 10, 2021

Well, I’ve been thinking about the two paths over the past six months and I’ve decided to pursue both! Yes, I know I said I would decide, but I can’t. It’s like picking which of your children is your favorite (not that I have children). I can’t see myself letting go of either opportunity to make a difference in the world, as I would forever regret not having picked the other path. What this means is that I’ll need to work twice as hard (which is fine by me) to achieve the level of success I desire in both fields. And in a way, by picking these two paths, I give up on my desire to build AI.

I’ve been thinking and feeling this recently: my desire to build AGI has been (and still is) declining. Partly because the field is so computer-science-focused, but also because I just don’t agree with much of what current AI researchers believe. For example, I don’t think classical computers are enough to implement AI, and I don’t think deep learning (or its future variants) will get us to the summit. Also, in exploring AI, I’ve grown to love the brain. I really enjoy learning about it, and I lose track of time when I’m reading a really good brain theory or idea. For instance, I was recently reading Thomas Nagel’s famous paper “What Is It Like to Be a Bat?” and was completely immersed in his arguments and ideas. That doesn’t happen when I read AI ideas; whenever I see a mention of computers or deep learning, I get bored and start disagreeing.

Another turnoff is that AI ideas are often loosely (or wrongly) based on simplified neuroscience, and I get irritated when I see a neuroscience idea being butchered by an AI researcher. As someone who believes we know more about the brain than we give ourselves credit for, I’m disappointed when people say “we don’t know anything about the brain”, because this disregards the 200 years of work that people have put into studying it. Heck, we have thousand-page textbooks on the brain and they still say that! So it’s not just the current field of AI putting me off from working on AI; it’s also my belief that we don’t yet have the right platform for it. I believe the right platform is neuromorphic computing.

Also, over the past few months, I’ve been thinking about starting a neuromorphic company. I can’t die knowing that I didn’t try, so try I will. The first step has been to work out the idea before any business takes place. The biggest issue is that I don’t know the right substrate/material for properly implementing a man-made neuron. What material can reorganize itself after damage, change not only its connection strengths but also what it’s connected to, and communicate spikes efficiently? I don’t know. But maybe I’m wrong; maybe we don’t need to copy every biological detail, and we’ll find some other way to implement the core features of a neuron. I’m still in the early stages of working out the technical details, but I believe it will happen, because it must happen.
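
To pin down the requirement, here’s a toy software sketch of my own (an illustration of the behavior I want from the material, not a hardware design) separating synaptic plasticity, changing a connection’s strength, from structural plasticity, changing what the connection points at:

```python
class PlasticNeuron:
    """Toy neuron whose synapses can change in strength AND in target."""
    def __init__(self, name):
        self.name = name
        self.synapses = {}  # target neuron name -> connection weight

    def strengthen(self, target, delta=0.1):
        # Synaptic plasticity: adjust the weight of a connection.
        self.synapses[target] = self.synapses.get(target, 0.0) + delta

    def rewire(self, old_target, new_target):
        # Structural plasticity: move the connection to a new target while
        # keeping its learned weight. Trivial in software, hard in hardware.
        self.synapses[new_target] = self.synapses.pop(old_target)

n1 = PlasticNeuron("n1")
n1.strengthen("n2")    # form a synapse onto n2 and strengthen it
n1.strengthen("n2")
print(n1.synapses)     # {'n2': 0.2}
n1.rewire("n2", "n7")  # n2 is damaged: reorganize around it
print(n1.synapses)     # {'n7': 0.2}
```

In software the rewire is a dictionary update; in a physical substrate it means growing or redirecting an actual connection, which is exactly the property I don’t know how to get from any current material.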

Aside from neuromorphic computing, I read a textbook called “The Neurology of Consciousness” and found a gem of an idea in it (even though most of the textbook was mediocre). The idea is this: suppose there are three areas in the brain devoted to, say, vision. One area takes in visual information from the eye, one generates visual information (implementing the ability to imagine), and one compares the two and generates our experience of vision. We’re only conscious of the last area, since it generates our experience, but damage to any of the areas affects vision. There are clinical cases where damage to one area spares the others. For example, some patients can’t imagine colors but can perceive and understand them, while other patients can’t imagine or perceive colors but still understand them. The interesting case is when a patient can’t understand colors at all; this would mean damage to the last area, and it resembles a species that simply lacks that color in its visual spectrum. This is getting a bit long, so you can read my textbook notes to learn more, but this idea makes so much sense to me. It finally offers an idea of where consciousness sits, and it explains anosognosia (a patient’s unawareness of their own deficit).
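
Here’s a toy sketch of that three-area arrangement (my own rendering of the idea, not a model from the textbook), showing how damage to each area reproduces the three deficit patterns above:

```python
class ColorVision:
    """Toy three-area model: input, generator (imagery), comparator (experience)."""
    def __init__(self):
        self.input_ok = True       # area 1: receives color from the eye
        self.generator_ok = True   # area 2: generates imagined color
        self.comparator_ok = True  # area 3: compares the two; seat of experience

    def report(self):
        if not self.comparator_ok:
            # Like a species lacking that color in its visual spectrum.
            return "cannot understand color at all"
        perceive = "can" if self.input_ok else "cannot"
        imagine = "can" if self.generator_ok else "cannot"
        return f"{perceive} perceive, {imagine} imagine, still understands color"

v = ColorVision()
v.generator_ok = False
print(v.report())  # can perceive, cannot imagine, still understands color

v.input_ok = False
print(v.report())  # cannot perceive, cannot imagine, still understands color

v.comparator_ok = False
print(v.report())  # cannot understand color at all
```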

While this idea is exciting, it’s only a small piece of the bigger picture of the brain. I’ve been thinking about ways to organize my knowledge of the brain into better categories, and I may write a textbook on it, but that’s an idea for another time.