If I had to describe latent space in one sentence, it would be this: a compressed representation of data.
Imagine a large dataset of handwritten digits (0–9) like the one shown above. Handwritten images of the same number (e.g. images of 3s) are more similar to each other than to images of different numbers (e.g. 3s vs. 7s). But can we train an algorithm to recognize these similarities? How?
If you have trained a model to classify digits, then you have also trained the model to learn the ‘structural similarities’ between images. …
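To make "compressed representation" concrete, here is a minimal sketch of the idea using a linear encoder (a PCA-style projection via SVD) rather than a trained neural network; the data, dimensions, and noise level are all made up for illustration:

```python
import numpy as np

# Toy "images": 100 samples of 4-dimensional data that secretly live
# on a 2-dimensional subspace, plus a little noise.
rng = np.random.default_rng(0)
codes = rng.normal(size=(100, 2))       # the hidden 2-D structure
mixing = rng.normal(size=(2, 4))        # lift it into 4-D "pixel" space
data = codes @ mixing + 0.01 * rng.normal(size=(100, 4))

# A linear "encoder": project onto the top-2 principal directions.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
encoder = vt[:2].T                      # 4 -> 2 compression
latent = centered @ encoder             # the latent representation

# A linear "decoder": map the 2-D latent codes back to 4-D.
reconstruction = latent @ encoder.T + data.mean(axis=0)

print(latent.shape)                               # (100, 2)
print(np.allclose(reconstruction, data, atol=0.1))  # True
```

Each 4-D point is squeezed down to 2 numbers, yet those 2 numbers are enough to rebuild the original almost exactly; an autoencoder does the same thing with learned, nonlinear encoders and decoders.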
Zero-shot learning allows a model to recognize what it hasn’t seen before.
Imagine you’re tasked with designing the latest and greatest machine learning model that can classify all animals. Yes, all animals.
Using your machine learning knowledge, you immediately understand that you need a labeled dataset with at least one example for every single animal. There are 1,899,587 described species in the world, so you're gonna need a dataset with roughly 2 million different classes.
Contrastive learning is a machine learning technique used to learn the general features of a dataset without labels by teaching the model which data points are similar or different.
Let’s begin with a simplistic example. Imagine that you are a newborn baby that is trying to make sense of the world. At home, let’s assume you have two cats and one dog.
Even though no one tells you that they are ‘cats’ and ‘dogs’, you may still realize that the two cats look similar compared to the dog.
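One common way to turn this intuition into math is a pairwise contrastive loss (in the style of Hadsell et al.): similar pairs are pulled together in embedding space, and dissimilar pairs are pushed at least a margin apart. The embeddings below are made-up numbers, just to show the mechanics:

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, is_similar, margin=1.0):
    """Pairwise contrastive loss: similar pairs (is_similar=1) are pulled
    together; dissimilar pairs (is_similar=0) are pushed apart until they
    are at least `margin` away from each other."""
    d = np.linalg.norm(emb_a - emb_b, axis=1)             # Euclidean distance
    pull = is_similar * d ** 2                            # shrink similar pairs
    push = (1 - is_similar) * np.maximum(0.0, margin - d) ** 2
    return np.mean(pull + push)

# Two cats and a dog as hypothetical 2-D embeddings.
cat1 = np.array([[0.0, 0.0]])
cat2 = np.array([[0.1, 0.0]])
dog  = np.array([[2.0, 0.0]])

print(contrastive_loss(cat1, cat2, np.array([1])))  # small: the cats are close
print(contrastive_loss(cat1, dog, np.array([0])))   # 0.0: dog is beyond the margin
```

No labels like "cat" or "dog" ever appear in the loss; the model only needs to know which pairs are similar and which are different.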
What’s a cost function, optimization, a model, or an algorithm? The esoteric nuances of machine learning algorithms and terminology can easily overwhelm the machine learning novice.
As I was reading the Deep Learning book by Yoshua Bengio, Aaron Courville, and Ian Goodfellow, I was ecstatic when I reached the section that explained the common “recipe” that almost all machine learning algorithms share — a dataset, a cost function, an optimization procedure, and a model.
In this article, I summarize each universal ‘ingredient’ of machine learning algorithms by dissecting them into their simplest components.
With these ‘ingredients’ in mind, you no…
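Here is the four-ingredient recipe in its simplest possible form, sketched as linear regression with gradient descent (a toy setup, not from the book):

```python
import numpy as np

# Ingredient 1: a dataset of (x, y) pairs, here drawn from y = 3x + 1.
rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 1

# Ingredient 2: a model, here a line with parameters (w, b).
def model(x, w, b):
    return w * x + b

# Ingredient 3: a cost function, here mean squared error.
def cost(w, b):
    return np.mean((model(x, w, b) - y) ** 2)

# Ingredient 4: an optimization procedure, here plain gradient descent.
w, b = 0.0, 0.0
lr = 0.5
for _ in range(200):
    err = model(x, w, b) - y
    w -= lr * np.mean(2 * err * x)   # d(cost)/dw
    b -= lr * np.mean(2 * err)       # d(cost)/db

print(round(w, 2), round(b, 2))      # 3.0 1.0
```

Swap in a bigger dataset, a neural network for the model, cross-entropy for the cost, and Adam for the optimizer, and you have deep learning; the recipe itself never changes.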
A frequent question I get from students interested in learning AI:
“Hey Ekin, what resources would you recommend for getting started in AI and ML?”
That’s a great question. Often, it’s easy to get lost in the ocean of resources available on the shelf and on the internet.
For all those curious beginners who are just looking for a place to start, here are the 5 essential resources that were instrumental in my own journey from zero to proficient (not a hero yet, but striving to get better every day)!
by Michael Nielsen
Semantic segmentation. My absolute favorite task. I would make a deep learning model, have it all nice and trained… but wait. How do I know my model is performing well? In other words, what are the most common metrics for semantic segmentation? Here's a clear-cut guide to the essential metrics that you need to know to ensure your model performs well. I have also included Keras implementations below.
If you want to learn more about Semantic Segmentation with Deep Learning, check out this Medium article by George Seif.
CS @ Stanford University | Stanford ML Group