Posts

The Fractals, Infinity, Universe and Measurement Error

Infinite division of the universe and the connection to fractals. The universe is a fascinating and mysterious place, full of wonders that we are only beginning to unravel. One of the most intriguing questions scientists have been trying to answer is whether the universe is infinite or finite, and whether it has a simple or complex structure. One way to approach this question is through the concept of fractals: geometric shapes that repeat themselves at different scales, creating patterns that look similar but not identical. Fractals are found in nature, such as in snowflakes, ferns, coastlines, and clouds, but they can also be generated mathematically by applying simple rules repeatedly. Some cosmologists have proposed that the universe itself is a fractal, meaning that it has a self-similar structure at different scales. For example, galaxies are composed of stars and planets, which are composed of atoms, which are composed of subatomic part...

Create Classification model using Tensorflow

Here too we need to follow the typical steps in training a neural network. Define the network: using the Keras functional API, we define a deep neural network with an input layer that takes 784 inputs. It is followed by two dense layers of 64 neurons each. The output layer is also a dense layer; it produces 10 output probabilities, one per class, using a softmax activation. The dense layers contain 138 neurons in total (64 + 64 + 10); note that the trainable parameter count is much larger, since each dense layer also has a weight matrix and biases (55,050 parameters in all).

  # Here we define the model
  # Here we use the Keras functional API; note the passing of inputs
  def base_model():
      inputs = tf.keras.Input(shape=(784,), name="fashions")  # 28 x 28
      x = tf.keras.layers.Dense(64, activation='relu', name='dense_1')(inputs)
      x = tf.keras.layers.Dense(64, activ...
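A complete, runnable version of this model might look like the following sketch. The layer sizes and names follow the excerpt; the second hidden layer's activation, the output layer's name, and the compile settings are assumptions filling in the truncated code, not taken verbatim from the post:

```python
import tensorflow as tf

def base_model():
    # Functional API: inputs flow explicitly from layer to layer.
    inputs = tf.keras.Input(shape=(784,), name="fashions")  # 28 x 28, flattened
    x = tf.keras.layers.Dense(64, activation='relu', name='dense_1')(inputs)
    x = tf.keras.layers.Dense(64, activation='relu', name='dense_2')(x)
    # Output layer: 10 class probabilities via softmax (assumed completion).
    outputs = tf.keras.layers.Dense(10, activation='softmax', name='predictions')(x)
    return tf.keras.Model(inputs=inputs, outputs=outputs)

model = base_model()
# Assumed training configuration for a Fashion-MNIST-style task.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

With these shapes the total is (784·64 + 64) + (64·64 + 64) + (64·10 + 10) = 55,050 trainable parameters.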

Let's Create a Perceptron Using Tensorflow

There are 5 main steps in training a neural network, and they apply even to its atomic unit, the perceptron. Define the network: this could be many layers of neurons; in this case, just a perceptron. Prepare the training data: here we will generate a tensor of 1000 random values. Define the loss and optimizer functions: the optimizer is used in the training phase to measure and minimize the loss. Train the model with the training data to minimize the loss: this is the step where we find for what values of weight and bias the training data fits the model (here, just like y = mx + c). Validate the model. The perceptron consists of two trainable values, namely Weight (W) and Bias (b). As the loss function, we are going to use the mean squared error loss. 01. Let's define the Perceptron class

  # Define the model
  class Perceptron():
      def __init__(self):
          # initializing the trainable...
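The five steps above can be sketched end to end as follows. The data-generating rule (y = 3x + 2 plus noise), the learning rate, and the number of epochs are illustrative assumptions, not values taken from the post:

```python
import tensorflow as tf

# Step 1: define the network - a single perceptron with weight W and bias b.
class Perceptron:
    def __init__(self):
        # Initializing the two trainable values.
        self.w = tf.Variable(0.0)
        self.b = tf.Variable(0.0)

    def __call__(self, x):
        # y = W * x + b, just like y = mx + c
        return self.w * x + self.b

# Step 2: prepare the training data - 1000 random inputs with noisy targets
# following an assumed rule y = 3x + 2.
xs = tf.random.normal(shape=(1000,))
ys = 3.0 * xs + 2.0 + tf.random.normal(shape=(1000,), stddev=0.1)

# Step 3: define the loss (mean squared error) and the optimizer.
def mse_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

model = Perceptron()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

# Step 4: train - repeatedly adjust W and b to minimize the loss.
for epoch in range(100):
    with tf.GradientTape() as tape:
        loss = mse_loss(ys, model(xs))
    grads = tape.gradient(loss, [model.w, model.b])
    optimizer.apply_gradients(zip(grads, [model.w, model.b]))
```

After training, W and b should be close to the slope and intercept of the assumed rule (step 5: validate).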

Occam's Razor

This article is a direct extraction from other internet sources. "Occam's razor, also spelled Ockham's razor, also called law of economy or law of parsimony, principle stated by the Scholastic philosopher William of Ockham (1285–1347/49) that pluralitas non est ponenda sine necessitate, 'plurality should not be posited without necessity.' The principle gives precedence to simplicity: of two competing theories, the simpler explanation of an entity is to be preferred. The principle is also expressed as 'Entities are not to be multiplied beyond necessity.'" - Britannica. In this video, the narrator explains the use cases of Occam's razor in machine learning.

Intro to Deep Learning and Tensorflow Basics

First of all, let's look at what machine learning is. ML is simply converting physical representations/data into numbers and finding patterns in them. Deep learning is a subset of ML; in this article, I'll use ML and deep learning interchangeably. To find patterns in the numbers, computers use algorithms based on probabilistic methods. In conventional programming, we feed the computer a set of inputs and the rules to follow to get the desired output. But in ML we feed in a set of inputs and the desired outputs to generate the rules. These rules are figured out by an algorithm and then applied to unseen inputs, producing the same kind of outputs that were used to generate the rules in the first place. If you can build a simple rule-based system that doesn't require machine learning, do that - first rule of Google's Machine Learning Handbook. It is advisable not to overuse machine learning when a rule-based system can fulfil the same functionality. So when to use Deep ...
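The contrast between the two approaches can be sketched with a toy example; the specific rule (Celsius to Fahrenheit) and the fitting method (a least-squares line fit standing in for a learning algorithm) are illustrative assumptions:

```python
import numpy as np

# Conventional programming: inputs + a hand-written rule -> outputs.
def c_to_f_rule(celsius):
    return celsius * 9.0 / 5.0 + 32.0

# Machine learning: inputs + desired outputs -> a generated rule.
celsius = np.array([-40.0, 0.0, 10.0, 25.0, 100.0])
fahrenheit = np.array([-40.0, 32.0, 50.0, 77.0, 212.0])

# The algorithm figures out the rule (slope and intercept) from the examples.
slope, intercept = np.polyfit(celsius, fahrenheit, deg=1)

# The learned rule can now handle unseen inputs.
def c_to_f_learned(c):
    return slope * c + intercept
```

Both functions now agree on inputs that never appeared in the training examples, which is the whole point of generating the rule from data.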