Generalisation is the end goal of machine learning
Overfitting is the problem where your algorithm fits the training set extremely well but cannot generalise to new data
Dropout
Emerged relatively recently, but works well https://classroom.udacity.com/courses/ud730/lessons/6379031992/concepts/63923585860923
Dropout works by randomly zeroing out half of your activations during training
It's one of the most important techniques that has emerged in recent years. If dropout isn't working for you, you probably need a bigger network.
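A minimal NumPy sketch of the idea, using the common "inverted dropout" variant (the rescaling by the keep probability is my addition; it keeps the expected activation magnitude the same between training and evaluation):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, keep_prob=0.5, training=True):
    """Randomly zero activations during training; leave them untouched at eval time."""
    if not training:
        return activations  # at evaluation time, use all activations unchanged
    mask = rng.random(activations.shape) < keep_prob
    # Scale survivors by 1/keep_prob so the expected sum of activations is preserved
    return activations * mask / keep_prob

a = np.array([1.0, 2.0, 3.0, 4.0])
print(dropout(a))                  # each entry is either 0.0 or doubled
print(dropout(a, training=False))  # unchanged at evaluation time
```

With `keep_prob=0.5`, on average half of the activations are zeroed on each forward pass, so the network cannot rely on any single activation.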