Did you ever notice how theories named after people appear complicated?
Mention a mathematician’s or scientist’s name and most people automatically presume, “That’s way too complicated for me.” Hilbert spaces. The Pythagorean theorem. Schrödinger’s wave equation. Einstein’s relativity. Alfalfa’s loose tooth.
But here you and I are, ordinary people. And we’re curious. How do brains tell daisies from monkeys? Or fish from dogs? How do brains work? Such a simple question. And such a simple answer. Really? Why yes, of course. How do brains work? Quite well!
But what’s the magic inside the brain? How do you describe the mechanism? Is it spiritual? Is it mechanical? Or chemical? Electronic, maybe? Or some combination?
Well, a bunch of guys and gals with complicated names looked at stuff, and out popped computer programs that did stuff.
Who? What did they do? When did all this happen? What did they discover?
Well, God made a real neuron. He made lots of them. And people made imitations that were, eh, OK, but not that great. But they started to make us believe they had potential to do something valuable. So, rich people began spending money on it. Then politicians began spending lots of our money on it. And amazing things started to happen.
Computers began driving cars. Robots began making cars. Cell phones began reading fingerprints and recognizing objects. Post office sorters began reading handwritten addresses. Programmers began asking, “How did they do that?” Other programmers began to answer, “I don’t know.”
AI and Machine Learning are way more complicated than that
I searched YouTube far and wide for a perceptron video that would be fun, easy, and attention-holding, and frankly, I didn’t find any except this one. The others felt more like graduate-level university courses, and that’s for the very first, most elementary topic in the field.
However, if you already have a background in algebra, or better yet linear algebra, the video below by Paolo Ricciuti from The Coding Train is the simplest and most enjoyable one I have seen. I love my professors from Stanford and the University of Toronto, and they also presented the subject excellently and with great rigor, but their courses are probably better suited to someone pursuing a graduate degree in artificial intelligence.
This one, I hope, will be more fun and easy to understand.
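If you'd rather poke at the idea in code before (or after) watching, here is a minimal sketch of the classic Rosenblatt perceptron in plain Python. It isn't taken from any of the courses or videos mentioned here; it just shows the core trick: take a weighted sum of the inputs, fire if it clears a threshold, and nudge the weights whenever the guess is wrong.

```python
def predict(weights, bias, x):
    # Step activation: fire (1) if the weighted sum clears the threshold.
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, lr=0.1, epochs=20):
    # Start with zero weights and repeatedly correct mistakes.
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(weights, bias, x)
            # error is -1, 0, or +1, so weights only move on wrong guesses.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn logical AND, a tiny linearly separable toy problem.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(samples, labels)
print([predict(w, b, x) for x in samples])  # -> [0, 0, 0, 1]
```

That's the whole mechanism Rosenblatt proposed in the 1950s, and it only works when a straight line can separate the two classes. That limitation, and the way later researchers stacked these units into layers to get around it, is exactly where the rest of the story goes.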
For more to read on the Perceptron
Stanford - Neural Networks gives a short, interesting, easy-to-understand introduction to the perceptron. Professor Andrew Ng also teaches machine learning courses through Coursera, though they are quite challenging. They were my introduction to Machine Learning, and the only course I took that was as rigorous, if not much more so, was one from Geoffrey Hinton of the University of Toronto. He feels that course is very much outdated, but I found it deeply valuable.
Towards Data Science gives an awesome introduction to, and history of, the perceptron. The history and development of artificial intelligence is an extremely interesting subject in spite of its intensely geeky nature. There is so much to enjoy in it.
Fast.ai offers free online courses that are a much easier and more practical introduction to artificial intelligence. I heartily recommend going through that course, and perhaps some easier courses from Udemy and YouTube, before tackling a rigorous course like the ones on Coursera; the gentler material will make the rigorous courses more meaningful and easier to remember. Unfortunately for me, I chose the rigorous courses first. But I still love the challenge.