How Deep Learning Works With Example

Farjanul Nayem
8 min read · Jul 20, 2021

Deep Learning

If we stick with the fraud detection system that we worked on before with machine learning, it is easy to turn it into a deep learning example. Where the machine learning system built a model around parameters such as the number of dollars a user can send or receive, a deep learning model can start to build on the results that machine learning already offers.

Each layer of this neural network works by building on the previous layer. And with each of these layers, we can add in more data, such as the IP address, the retailer and the sender, a social media event, the user, and even a credit score, depending on what the model needs to learn.

The algorithms that come with deep learning are trained not just to find patterns in the transactions, but also to tell us when a pattern signals that someone needs to come in and investigate activity that may be fraudulent. The final layer relays a signal to an analyst, who can then choose to freeze the account in question until the investigation is completed and they determine whether or not money laundering is happening.
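To make the layering idea a bit more concrete, here is a minimal sketch in PyTorch of a small fraud-scoring network. The feature names and layer sizes are illustrative assumptions, not the actual system described above.

```python
import torch
import torch.nn as nn

# Hypothetical transaction features: amount, credit score, IP risk score,
# retailer ID (encoded), sender ID (encoded), social media signal, etc.
NUM_FEATURES = 8

# Each hidden layer builds on the representation produced by the previous one.
fraud_net = nn.Sequential(
    nn.Linear(NUM_FEATURES, 32),  # first layer: raw transaction features
    nn.ReLU(),
    nn.Linear(32, 16),            # deeper layer: combines features into patterns
    nn.ReLU(),
    nn.Linear(16, 1),
    nn.Sigmoid(),                 # output: probability the transaction is fraudulent
)

# One made-up transaction run through the network.
transaction = torch.rand(1, NUM_FEATURES)
fraud_probability = fraud_net(transaction)
print(fraud_probability.item())   # above some threshold -> flag for an analyst
```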

It is possible to use deep learning across all industries, with tasks based on what those industries need to accomplish. Some commercial apps, for example, use this technology to help with image recognition. There are open-source platforms with consumer recommendation apps, and even medical research tools that explore the possibility of reusing existing drugs for new ailments. These are just a few examples of what deep learning and its algorithms make possible.

Another thing we need to explore is why deep learning matters so much, and why this form of machine learning can achieve such impressive results where other methods may fail. To put it simply, deep learning works because of accuracy. Deep learning algorithms can achieve recognition accuracy at levels that are much higher than was possible before.

This helps consumer electronics meet the expectations of the user, and it is crucial for applications that rely on safety, such as the recent development of driverless cars. Recent advances in deep learning have improved to the point where, in many cases, systems that rely on deep learning can actually outperform humans at several tasks, such as classifying the objects found in images.

While the idea of deep learning has been around for a number of years and was first theorized in the 1980s, there are two big reasons why we are just hearing about it today, and why it wasn’t really seen as something useful to work with in the past.

These two reasons include:

First, deep learning requires large amounts of labeled data, and data sets of the size these networks need have only become widely available fairly recently. Second, deep learning requires substantial computing power; high-performance GPUs with the parallel architecture needed to train these networks in a reasonable amount of time have only recently become affordable and commonplace.

How Does Deep Learning Work?

To really understand deep learning, we need to take this further and see how it works. Most deep learning methods rely on a neural network architecture, which is why we often refer to these models as deep neural networks. The term deep refers to the number of hidden layers in the neural network. Traditional neural networks built with machine learning usually have only two or three hidden layers, but deep networks can have as many as 150, and that number is likely to grow over time.

Deep learning models are trained on large sets of labeled data, combined with neural network architectures that learn features directly from that data, without the need to extract the features manually.
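As a rough illustration of what "deep" means here, the PyTorch sketch below builds the same kind of fully connected network with an adjustable number of hidden layers; the input size, width, and layer counts are made-up values, not a recommended design.

```python
import torch.nn as nn

def make_network(num_inputs, num_outputs, hidden_layers=3, width=64):
    """Build a fully connected network with the given number of hidden layers."""
    layers = [nn.Linear(num_inputs, width), nn.ReLU()]
    for _ in range(hidden_layers - 1):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, num_outputs))
    return nn.Sequential(*layers)

shallow_net = make_network(20, 2, hidden_layers=3)    # a "traditional" network
deep_net = make_network(20, 2, hidden_layers=150)     # a deep network
```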

While there are a few different types of deep neural networks that a programmer can focus on, one of the best options, and the one that data scientists are most likely to use, is the convolutional neural network, or CNN. A CNN convolves learned features with the input data and uses 2D convolutional layers, which makes this architecture well suited to processing 2D data such as images.

CNNs are nice for a number of reasons, but one is that they eliminate the need for manual feature extraction. This means we do not need to go through and identify the features used to classify the images ourselves. The CNN extracts features directly from the image. The relevant features are not trained ahead of time; they are learned while the network trains on the collection of images that we provide.

This may sound complicated, but automated feature extraction actually makes things easier. It helps ensure that the deep learning models we build end up highly accurate for a variety of computer vision tasks, including object classification.

CNNs learn to detect a variety of features within an image, using what can be tens or hundreds of hidden layers in the process. Every hidden layer increases the complexity of the learned image features. For example, the first hidden layer might be responsible for detecting the edges in the image, while the last layer, however deep the network goes, works on detecting the more complex shapes that help it figure out what objects are in that specific image.
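A minimal convolutional network in PyTorch might look like the sketch below; the layer sizes and the assumption of 28x28 grayscale images with 10 classes are illustrative choices only.

```python
import torch
import torch.nn as nn

# Assumes 28x28 grayscale images and 10 object classes (illustrative choices).
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # early layer: edge-like features
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # later layer: more complex shapes
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # scores for the 10 classes
)

images = torch.rand(4, 1, 28, 28)  # a batch of 4 fake images
scores = cnn(images)               # shape: (4, 10)
```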

Creating and Training Our Deep Learning Models

We will get more in-depth about this in a bit, but let's take a moment to look at the basics of creating and then training our own deep learning models. There are three common approaches, especially when we want to focus on object classification:

Training from scratch. To train our own deep network from scratch, we have to take on a few big steps that can take some time. First, we need to gather a large amount of data, specifically a large labeled data set. Then we have to design a network architecture that can learn the features found in that labeled data set and model them well.

This is a very time-consuming option, but it can be a good choice when we want to create a brand-new application, or when we are working with an application that has a large number of output categories. Because of the amount of labeled data required and the rate of learning, this is the least common of the three approaches. These networks usually take weeks or longer to train properly, but they can still be a good option in some cases.
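A bare-bones training-from-scratch loop in PyTorch looks roughly like this; the fake data, the tiny model, the loss, and the optimizer settings are all stand-ins for whatever your real labeled data set requires.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in labeled data set: 1000 fake 28x28 images across 10 classes.
images = torch.rand(1000, 1, 28, 28)
labels = torch.randint(0, 10, (1000,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

# A deliberately tiny model; a real from-scratch design would be much deeper.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):                      # real training runs far longer
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()                     # compute gradients
        optimizer.step()                    # update the weights
```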

Then we can work with what is known as transfer learning. Most deep learning applications use the transfer learning approach, a process in which we start from a pre-trained model and then fine-tune parts of it.

To make this work, we pick out a network that already exists, such as GoogLeNet or AlexNet, and feed in new data containing classes that were previously unknown to it. After we make the necessary changes and tweaks to the network, it can perform the new task, such as categorizing just dogs and cats instead of 1,000 different objects at a time.

This kind of deep learning model has a few advantages, and one of them is that it needs much less data. We can limit the processing to just a few thousand images, rather than the millions of images the original network may have been trained on. This lets computation costs and time drop, often getting the model done in a few minutes, depending on the situation.

This kind of learning requires an interface to the internals of the pre-existing network, so it can be modified and enhanced in a surgical manner to handle the new task. MATLAB has the functions and tools you need to get good results with transfer learning.
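As a hedged sketch of the same idea outside MATLAB, the PyTorch/torchvision snippet below loads a pre-trained AlexNet and swaps its 1,000-class output layer for a 2-class one (say, cats vs. dogs); the class count, freezing strategy, and learning rate are illustrative choices.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Load a network pre-trained on ImageNet (1000 classes).
net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)

# Replace the final classifier layer so it predicts only 2 classes (e.g. cat vs. dog).
net.classifier[6] = nn.Linear(net.classifier[6].in_features, 2)

# Freeze the earlier layers and fine-tune only the new output layer.
for param in net.features.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(net.classifier[6].parameters(), lr=1e-4)
# ...then train on the new, much smaller labeled data set as usual.
```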

The third option for creating a deep learning model is feature extraction. A slightly less common, more specialized approach is to use the network as a feature extractor. Since each layer is responsible for learning certain features from an image, we can pull these features out of the network at any point during training. These features can then be used as input to a variety of other models, such as support vector machines (SVMs) and other kinds of machine learning models.
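One way to sketch this feature-extractor idea in Python is to strip the classification head off a pre-trained torchvision model and feed the resulting feature vectors to an SVM from scikit-learn; the specific backbone (ResNet-18) and the random stand-in data are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torchvision.models as models
from sklearn.svm import SVC

# Pre-trained network with its final classification layer removed.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()   # now outputs a 512-dimensional feature vector
backbone.eval()

# Stand-in images and labels; in practice these come from your own data set.
images = torch.rand(100, 3, 224, 224)
labels = torch.randint(0, 2, (100,)).numpy()

with torch.no_grad():
    features = backbone(images).numpy()   # shape: (100, 512)

# Train a classical machine learning model (an SVM) on the extracted features.
svm = SVC(kernel="rbf").fit(features, labels)
```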

However, you may find that training one of these deep learning models can take a lot of time, sometimes days or even weeks. This doesn't mean they aren't worth the effort, but sometimes a business wants to take all of its data and build a good model right away, rather than waiting days or weeks to even get started.

Something known as GPU acceleration can provide a significant speed-up here. Using MATLAB with a GPU reduces the time it takes to train a network, and can cut the training time for an image classification problem from days to just hours. Think about how much faster that is! With these tools we can train one of our deep learning models within a single working day and still get amazing results, as long as we use deep learning the right way.

Don’t worry if you are not completely familiar with the GPUs and how they work. MATLAB is going to use the GPU, whenever it is available, without requiring us to understand how to program with these GPUs explicitly. This makes the whole process a bit easier to handle.
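The point above is about MATLAB handling the GPU for you; in PyTorch the same idea is an explicit but short device check, as in this sketch (the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(128, 10).to(device)     # move the model's weights to the device
batch = torch.rand(32, 128).to(device)    # move the data to the same device
output = model(batch)                     # the forward pass now runs there
```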

There is so much we can do with deep learning. Because deep learning is a form of machine learning, it is easy to mix up the two terms when you are getting started. But with the lessons we will look at in this guidebook, we can build a better understanding of deep learning and what it can do for us.

See full article here: https://www.djuices.com/how-deep-learning-works-with-example/

Originally published at https://www.djuices.com on July 20, 2021.
