I’ve always been interested in Artificial Intelligence, particularly in how machines can be designed to think like people and mimic their behaviour, as well as in data-driven technologies such as Machine Learning and Deep Learning.

In this post, I will share my four-year journey through Artificial Intelligence.

So, in my freshman year, in 2018, I took the first step on my path to AI.


I began by learning Python on Coursera with the “Python for Everybody” specialization from the University of Michigan, in which Dr. Charles Severance introduces fundamental programming concepts (strings, lists, tuples, dictionaries, sets, loops, functions, pattern matching with regular expressions, object-oriented programming, and so on) and more advanced ones (accessing web data, web scraping, parsing JSON and XML data formats, and working with SQL databases). At the end of the specialization, I completed a Capstone Project that helped solidify these concepts by putting all of the above skills to use. As a newbie, I also read Al Sweigart’s book “Automate the Boring Stuff with Python.”
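To give a flavour of those fundamentals, here is a toy example of my own (not taken from the course) that combines a regular expression with a dictionary to count word frequencies:

```python
import re

text = "Python is fun and Python is readable"

# Pattern matching: pull out alphabetic word tokens, ignoring case
words = re.findall(r"[a-z]+", text.lower())

# Dictionaries: tally how often each word appears
counts = {}
for word in words:
    counts[word] = counts.get(word, 0) + 1

print(counts["python"])  # prints 2
```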

I also took a number of additional Python crash courses on Coursera and learnt from the websites W3Schools and GeeksforGeeks. Coding platforms such as HackerRank, meanwhile, helped me sharpen my problem-solving and logical skills.


After that, I realised I needed to brush up on the fundamentals of mathematics: I had studied them in school, but in a different context, without much intuition, and without seeing how they are applied in computer science. So I enrolled in Imperial College London’s “Mathematics for Machine Learning” specialization on Coursera (three courses: Linear Algebra, Multivariate Calculus, and Principal Component Analysis (PCA)). This specialization bridges that gap by building an intuitive understanding of the mathematics behind Machine Learning and Data Science. For Probability and Statistics, I turned to edX.
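As a small illustration of what PCA does mechanically (my own NumPy sketch with made-up data, not course material): centre the data, eigendecompose its covariance matrix, and project onto the top eigenvector.

```python
import numpy as np

# Made-up 2-D data with an obvious correlated direction
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9],
              [1.9, 2.2], [3.1, 3.0], [2.3, 2.7]])

# Centre the data, then eigendecompose the covariance matrix
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

# The first principal component is the eigenvector with the largest eigenvalue
pc1 = eigvecs[:, -1]
projected = Xc @ pc1  # a 1-D representation of the 2-D data
```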

At the end of these courses, I had acquired the Python programming and mathematics foundation needed to continue my journey and progress to more advanced Machine Learning courses. As a saying often attributed to Einstein goes,

“In the middle of difficulty lies opportunity.”

So, as an opportunist, I took full advantage of the lockdown caused by COVID-19, and I am pleased to report that those four months were hugely beneficial to me. I was able to devote whole days to learning, and after months of struggle and hard work I had come a long way on my road towards AI.


After gaining a sufficient understanding of Python and mathematics, I enrolled in Stanford University’s “Machine Learning” course on Coursera, taught by Prof. Andrew Ng.

This 11-week course covers many ideas in depth and with excellent clarity. It is an introduction to machine learning, data mining, and statistical pattern recognition. It taught me Supervised Learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks), Unsupervised Learning (clustering, dimensionality reduction, recommender systems, deep learning), Anomaly Detection, Data Synthesis, Photo OCR (Optical Character Recognition), Ceiling Analysis, and other ML algorithms.
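Gradient descent, the optimization workhorse running through the course, can be sketched for a one-variable linear regression (a minimal NumPy version of my own, assuming synthetic noiseless data):

```python
import numpy as np

# Synthetic data from the line y = 3x + 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 3.0 * x + 1.0

# Batch gradient descent on mean squared error
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    err = (w * x + b) - y
    w -= lr * (2 * err * x).mean()  # d(MSE)/dw
    b -= lr * (2 * err).mean()      # d(MSE)/db

print(round(w, 3), round(b, 3))  # converges towards w=3, b=1
```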

I also learnt how to build an efficient model, optimize a system for its requirements, and improve performance. Various case studies showed me how learning algorithms are applied to smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.

I am really grateful to Prof. Andrew Ng and Coursera for this chance.


After finishing these courses, I needed to apply Machine Learning ideas to real-world problems using Python. To do this, I enrolled in the University of Michigan’s “Applied Data Science with Python” specialization on Coursera. This specialization (five courses: Introduction to Data Science in Python; Applied Plotting, Charting, and Data Representation in Python; Applied Machine Learning in Python; Applied Text Mining in Python; and Applied Social Network Analysis in Python) helped me understand and apply techniques such as statistical analysis, machine learning, information visualization, text analysis, and social network analysis using popular Python toolkits such as pandas and matplotlib.

This specialization places a heavy emphasis on applied learning, and I had the opportunity to work with a variety of real-world datasets. I analysed data and pulled insights from it using techniques such as data extraction, pre-processing, data wrangling, exploratory analysis, model construction, model assessment, model refinement, and data visualization.
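A tiny pandas sketch of that wrangle-then-analyse workflow (invented data, purely illustrative): impute a missing value, then aggregate by group.

```python
import pandas as pd

# Invented readings with one missing temperature
df = pd.DataFrame({
    "city": ["A", "A", "B", "B"],
    "temp": [21.0, None, 18.0, 20.0],
})

# Pre-processing: fill the missing value with the column mean
df["temp"] = df["temp"].fillna(df["temp"].mean())

# Analysis: mean temperature per city
means = df.groupby("city")["temp"].mean()
```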

You can also find my course slides, quizzes, lecture notebooks, and assignments on GitHub: Applied-Data-Science-with-Python-GitHub

Alongside this, I can recommend Harrison Kinsley’s YouTube channel “Sentdex,” where I studied the Matplotlib Tutorial Series, Data Analysis with Python 3 and Pandas, NLTK with Python 3 for Natural Language Processing, and Machine Learning with Python.

HANDS-ON (BEGINNER): Using the skills listed above, I worked on a variety of Machine Learning and NLP projects, including Iris Species Classification, Breast Cancer Classification, Boston Housing Price Prediction, Stock Price Prediction, Loan Prediction, Titanic Survival Classification, Wine Quality Prediction, Air Pollution During Lockdown, Twitter Sentiment Analysis, Fake News Detection, and Face Recognition with PCA.
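The first of those, Iris species classification, fits in a few lines of scikit-learn; this sketch uses a k-nearest-neighbours baseline of my choosing for illustration, not necessarily the model from my original project:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the classic Iris dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit a simple k-nearest-neighbours classifier and score it on held-out data
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```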


“Artificial intelligence is the new electricity,” says Andrew Ng.

Inspired by that well-known quote and my strong interest in Artificial Intelligence, I enrolled in another Coursera specialization, “Deep Learning” by deeplearning.ai. This five-course specialization is superbly designed to equip you with essential competencies in Deep Neural Networks, CNNs, RNNs, NLP, and Sequence Models.

  • Neural Networks and Deep Learning: how to build, train, and apply fully connected deep neural networks; how to implement efficient (vectorized) neural networks; the key parameters in a neural network architecture.
  • Improving Deep Neural Networks: initialization, L2 and dropout regularization, batch normalization, gradient checking; a range of optimization algorithms, such as mini-batch gradient descent, Momentum, RMSprop, and Adam, and their convergence; implementing neural networks in TensorFlow.
  • Structuring Machine Learning Projects: diagnosing errors in an ML system; prioritizing the most promising directions for reducing error; end-to-end learning, transfer learning, and multi-task learning.
  • Convolutional Neural Networks: building a CNN and applying modern variants such as residual networks; applying CNNs to visual detection and recognition tasks; generating art with neural style transfer.
  • Sequence Models: building and training RNNs and widely used variants such as GRUs and LSTMs; text synthesis; audio applications such as speech recognition and music synthesis.
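The “efficient (vectorized)” point is worth a concrete sketch: one fully connected layer applied to a whole batch with a single matrix multiplication instead of a Python loop over examples (my own NumPy toy, not assignment code):

```python
import numpy as np

def dense_forward(X, W, b):
    """One fully connected layer with ReLU, vectorized over the batch."""
    Z = X @ W + b            # one matmul handles every example at once
    return np.maximum(Z, 0)  # ReLU activation

rng = np.random.default_rng(1)
X = rng.normal(size=(32, 4))       # batch of 32 examples, 4 features each
W = rng.normal(size=(4, 8)) * 0.1  # weights for 8 hidden units
b = np.zeros(8)

A = dense_forward(X, W, b)  # activations, shape (32, 8)
```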

Within this specialization I also had the opportunity to work through several hands-on assignments, such as Building a Recurrent Neural Network Step by Step; Character-Level Language Modeling on Dinosaur Island; Jazz Improvisation with LSTM; Operations on Word Vectors – Debiasing; Emojify; Neural Machine Translation with Attention; and Trigger Word Detection.

This journey has been highly instructive and has undoubtedly opened many doors for me: from building a simple cat detector to detecting objects and people, and from synthesizing music to recognizing spoken words and translating them into another language.

Aurélien Géron’s “Hands-On Machine Learning with Scikit-Learn and TensorFlow” was another book I read.


After Deep Learning, I needed to understand the TensorFlow framework itself, so I enrolled in the “DeepLearning.AI TensorFlow Developer Professional Certificate,” which consists of four courses:

  • Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning
  • Convolutional Neural Networks in TensorFlow
  • Natural Language Processing in TensorFlow
  • Sequences, Time Series and Prediction

I gained practical experience through 16 Python programming assignments, applying machine learning skills with TensorFlow to build and train powerful models, and exploring strategies to prevent overfitting, such as augmentation and dropout.
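Dropout, one of the overfitting counter-measures just mentioned, is simple at heart; here is a NumPy sketch of the inverted-dropout idea behind a layer like tf.keras.layers.Dropout (my own illustration, not code from the course):

```python
import numpy as np

def dropout(activations, rate, rng):
    """Inverted dropout: zero a fraction `rate` of units at training time,
    scaling the survivors so the expected activation is unchanged."""
    keep = 1.0 - rate
    mask = rng.random(activations.shape) < keep
    return activations * mask / keep

rng = np.random.default_rng(0)
a = np.ones((1000, 64))           # pretend these are hidden activations
d = dropout(a, rate=0.5, rng=rng)

# Roughly half the units are zeroed; the rest are scaled up to 2.0,
# so the mean activation stays close to the original 1.0
```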

I learnt how to:

  • Improve a network’s performance at recognizing real-world images by using convolutions.
  • Teach machines to understand, analyse, and respond to human speech with natural language processing systems.
  • Apply transfer learning and extract learnt features from models.
  • Create NLP systems that handle text, including tokenizing and vectorizing phrases so they may be fed into a neural network.
  • Discover how to use RNNs, GRUs, and LSTMs in TensorFlow, as well as how to train an LSTM on existing text to produce poetry.
  • Discover how to design time series models, prepare time series data, and utilize RNNs and 1D ConvNets for prediction.

Finally, using real-world data, I created a Sunspot Prediction Model. I strongly suggest this course to anyone who feels lost after completing the Deep Learning Specialization.

Furthermore, the world is always changing, so I am practising on Kaggle to apply and keep refreshing these hard-won skills in the AI and Machine Learning field, ensuring my abilities keep pace while I gain more practical experience.