
Variance and Bias!

Ever wondered what bias and variance are and how they affect our Machine Learning models? I was on the lookout for a basic definition of bias and variance in ML terms, stumbled upon a site with a beautiful explanation, and this is what I learned from it.

Bias: Bias is the error that comes from a model making overly simple assumptions, so that it systematically misses the true pattern in the data. Say I have trained a model and, while testing it, I measure its accuracy using the test data. If it predicts with an accuracy of 96%, the remaining 4% is the total error; bias is only one part of that error (the rest comes from variance and from irreducible noise in the data itself). To decrease bias, we usually make the model more flexible, which tends to increase variance.

Variance: In statistics, variance is the spread of data around the mean. In ML, it measures how sensitive a model is to the particular dataset it was trained on: if we retrain the same model on a slightly different dataset, how much do its predictions change?
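To make this concrete, here is a minimal sketch of that "retrain on slightly different data" idea. The sine curve, the noise level, and the degree-9 polynomial model are all my own toy choices, not part of the original explanation: we refit the same model on many freshly drawn noisy datasets and watch how much its prediction at one fixed point jumps around.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(x)  # the underlying pattern we are trying to learn

x_train = np.linspace(0, 3, 20)
x_query = 1.5  # a single point where we inspect the model's prediction

preds = []
for _ in range(200):
    # a fresh noisy sample of the same underlying function
    y_noisy = true_fn(x_train) + rng.normal(0, 0.3, size=x_train.shape)
    # a deliberately flexible model: degree-9 polynomial fit
    coeffs = np.polyfit(x_train, y_noisy, deg=9)
    preds.append(np.polyval(coeffs, x_query))

preds = np.array(preds)
variance = preds.var()  # spread of the 200 predictions around their own mean
print(f"variance of predictions at x={x_query}: {variance:.4f}")
```

A positive variance here is exactly the sensitivity described above: even though the underlying function never changed, the model's prediction at the same point differs every time the training noise differs.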

Now, we need to remember that low bias combined with high variance can result in overfitting of the model, while high bias with low variance can cause underfitting.

Ideally, we want a model with both low bias and low variance, but decreasing one usually increases the other; finding the right balance between them is known as the bias-variance tradeoff. To actually measure the two components of a model's error, we can use a technique called Bias-Variance Decomposition, and it can be applied to both Regression and Classification models.
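Here is a rough sketch of that decomposition for a regression model, done by simulation (the sine function, noise level, and degree-2 model are my own toy setup, and libraries such as mlxtend offer a ready-made version of this). For squared error, the expected test error at a point breaks down as bias² + variance + irreducible noise, and we can estimate each piece by retraining the model many times:

```python
import numpy as np

rng = np.random.default_rng(7)
noise_sd = 0.3  # standard deviation of the label noise

def f(x):
    return np.sin(x)  # the true, noise-free function

x_train = np.linspace(0, 3, 30)
x0 = 1.5           # the point where we decompose the error
n_rounds = 500

preds = np.empty(n_rounds)
for i in range(n_rounds):
    y = f(x_train) + rng.normal(0, noise_sd, x_train.size)
    c = np.polyfit(x_train, y, deg=2)  # a deliberately simple model
    preds[i] = np.polyval(c, x0)

bias_sq = (preds.mean() - f(x0)) ** 2  # squared gap: average prediction vs truth
variance = preds.var()                 # spread of predictions across retrainings

# empirical squared error against fresh noisy labels at x0
y0 = f(x0) + rng.normal(0, noise_sd, n_rounds)
mse = np.mean((preds - y0) ** 2)

print(f"bias^2   : {bias_sq:.4f}")
print(f"variance : {variance:.4f}")
print(f"noise    : {noise_sd ** 2:.4f}")
print(f"MSE      : {mse:.4f}  vs  bias^2 + variance + noise = "
      f"{bias_sq + variance + noise_sd ** 2:.4f}")
```

The two final numbers should come out close to each other, which is the decomposition in action: total error splits into the part the model systematically gets wrong, the part that fluctuates with the training data, and the part nothing can remove.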


[Figure: bias-variance illustration, taken from geeksforgeeks.org]


