
Deep Neural Networks

A deep neural network (DNN) is an ANN with multiple hidden layers between the input and output layers. Similar to shallow ANNs, DNNs can model complex non-linear relationships.

The main purpose of a neural network is to receive a set of inputs, perform progressively complex calculations on them, and produce output to solve real-world problems like classification. Here we restrict ourselves to feed-forward neural networks.

We have an input, an output, and a flow of sequential data in a deep network.

Neural networks are widely used in supervised learning and reinforcement learning problems. These networks are based on a set of layers connected to each other.

In deep learning, the number of hidden layers, mostly non-linear, can be large; say about 1000 layers.

DL models produce much better results than normal ML networks.

We mostly use the gradient descent method for optimizing the network and minimising the loss function.

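As a minimal sketch of this idea, here is plain-NumPy gradient descent on a single linear model with a mean-squared-error loss (the synthetic data and learning rate are purely illustrative):

```python
import numpy as np

# Synthetic data: y = 3x + 2 plus a little noise (illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
y = 3 * x + 2 + 0.1 * rng.normal(size=(100, 1))

w, b = 0.0, 0.0   # parameters to learn
lr = 0.1          # learning rate

for step in range(200):
    y_pred = w * x + b                      # forward pass
    loss = np.mean((y_pred - y) ** 2)       # mean-squared-error loss
    grad_w = np.mean(2 * (y_pred - y) * x)  # d(loss)/dw
    grad_b = np.mean(2 * (y_pred - y))      # d(loss)/db
    w -= lr * grad_w                        # gradient descent update
    b -= lr * grad_b

print(w, b)  # should approach 3 and 2
```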

We can use ImageNet, a repository of millions of digital images, to classify a dataset into categories like cats and dogs. DL nets are increasingly used for dynamic images apart from static ones, and for time series and text analysis.

Training on data sets forms an important part of deep learning models. In addition, backpropagation is the main algorithm for training DL models.

DL deals with training large neural networks with complex input-output transformations.

One example of DL is the mapping of a photo to the name of the person(s) in the photo, as is done on social networks; describing a picture with a phrase is another recent application of DL.

Neural networks are functions that have inputs like x1, x2, x3… that are transformed into outputs like z1, z2, z3 and so on, through two (shallow networks) or several (deep networks) intermediate operations, also called layers.

The weights and biases change from layer to layer. 'w' and 'v' are the weights, or synapses, of the layers of the neural network.

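A minimal NumPy sketch of such a two-layer network, using `w` and `v` as the layer weights described above (the sizes and random values are chosen only for illustration):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

x = np.array([0.5, -1.2, 3.0])   # inputs x1, x2, x3

w = np.random.randn(4, 3)        # weights of the first layer
b1 = np.zeros(4)                 # biases of the first layer
v = np.random.randn(2, 4)        # weights of the second layer
b2 = np.zeros(2)                 # biases of the second layer

h = sigmoid(w @ x + b1)          # hidden activations
z = sigmoid(v @ h + b2)          # outputs z1, z2
print(z)
```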

The best use case of deep learning is the supervised learning problem. Here, we have a large set of data inputs with a desired set of outputs.

Here we apply the backpropagation algorithm to get correct output predictions.

The most basic data set of deep learning is MNIST, a dataset of handwritten digits.

We can train a deep Convolutional Neural Network with Keras to classify images of handwritten digits from this dataset.

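A hedged sketch of such a model with the Keras API (the layer sizes and training settings are illustrative, not tuned):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load MNIST and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=128,
          validation_data=(x_test, y_test))
```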

The firing or activation of a neural net classifier produces a score. For example, to classify patients as sick or healthy, we consider parameters such as height, weight, body temperature, blood pressure, etc.

A high score means the patient is sick and a low score means the patient is healthy.

Each node in the output and hidden layers has its own classifier. The input layer takes the inputs and passes its scores on to the next hidden layer for further activation, and this goes on till the output is reached.

This progression from input to output, from left to right in the forward direction, is called forward propagation.

The credit assignment path (CAP) in a neural network is the series of transformations starting from the input and leading to the output. CAPs describe probable causal connections between the input and the output.

The CAP depth for a given feed-forward neural network is the number of hidden layers plus one, as the output layer is included. For recurrent neural networks, where a signal may propagate through a layer several times, the CAP depth is potentially unlimited.

Deep Nets and Shallow Nets

There is no clear threshold of depth that divides shallow learning from deep learning, but it is mostly agreed that for deep learning, which has multiple non-linear layers, the CAP must be greater than two.

The basic node in a neural net is a perceptron, mimicking a neuron in a biological neural network. Then we have the multi-layered perceptron, or MLP. Each set of inputs is modified by a set of weights and biases; each edge has a unique weight and each node has a unique bias.

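A single perceptron-style node can be sketched as follows (the weights and bias are chosen arbitrarily for illustration):

```python
import numpy as np

def perceptron(x, w, b):
    """Fire (return 1) if the weighted sum of the inputs plus the bias is positive."""
    return 1 if np.dot(w, x) + b > 0 else 0

x = np.array([1.0, 0.0, 1.0])    # inputs
w = np.array([0.6, -0.4, 0.9])   # one weight per input edge
b = -0.5                         # the node's bias
print(perceptron(x, w, b))       # -> 1
```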

The prediction accuracy of a neural net depends on its weights and biases.

The process of improving the accuracy of a neural network is called training. The output from a forward-prop net is compared to the value that is known to be correct.

The cost function or the loss function is the difference between the generated output and the actual output.

The point of training is to make the cost of training as small as possible across millions of training examples. To do this, the network tweaks the weights and biases until the prediction matches the correct output.

Once trained well, a neural net has the potential to make an accurate prediction every time.

When the patterns get complex and you want your computer to recognise them, you have to go for neural networks. In such complex pattern scenarios, neural networks outperform all other competing algorithms.

There are now GPUs that can train them faster than ever before. Deep neural networks are already revolutionizing the field of AI.

Computers have proved to be good at performing repetitive calculations and following detailed instructions but have been not so good at recognising complex patterns.

If there is a problem of recognition of simple patterns, a support vector machine (SVM) or a logistic regression classifier can do the job well, but as the complexity of the patterns increases, there is no way but to go for deep neural networks.

Therefore, for complex patterns like a human face, shallow neural networks fail and there is no alternative but to go for deep neural networks with more layers. The deep nets are able to do their job by breaking down the complex patterns into simpler ones. For a human face, for example, a deep net would use edges to detect parts like lips, nose, eyes, ears and so on, and then re-combine these together to form a human face.

Prediction has become so accurate that recently, at a Google Pattern Recognition Challenge, a deep net beat a human.

This idea of a web of layered perceptrons has been around for some time; in this area, deep nets mimic the human brain. But one downside to this is that they take a long time to train, a hardware constraint.

However, recent high-performance GPUs have been able to train such deep nets in under a week, while fast CPUs could have taken weeks or perhaps months to do the same.

Choosing a Deep Net

How to choose a deep net? We have to decide if we are building a classifier or if we are trying to find patterns in the data, and if we are going to use unsupervised learning. To extract patterns from a set of unlabelled data, we use a Restricted Boltzmann machine or an autoencoder.

Consider the following points while choosing a deep net −

  • For text processing, sentiment analysis, parsing and named entity recognition, we use a recurrent net or a recursive neural tensor network (RNTN);

  • For any language model that operates at the character level, we use the recurrent net.

  • For image recognition, we use a deep belief network (DBN) or a convolutional network.

  • For object recognition, we use an RNTN or a convolutional network.

  • For speech recognition, we use a recurrent net.

In general, deep belief networks and multilayer perceptrons with rectified linear units (ReLU) are both good choices for classification.

For time series analysis, it is always recommended to use a recurrent net.

Neural nets have been around for more than 50 years, but only now have they risen to prominence. The reason is that they are hard to train; when we try to train them with a method called back propagation, we run into a problem called vanishing or exploding gradients. When that happens, training takes a longer time and accuracy takes a back seat. When training a data set, we are constantly calculating the cost function, which is the difference between the predicted output and the actual output from a set of labelled training data. The cost function is then minimized by adjusting the weights and bias values until the lowest value is obtained. The training process uses a gradient, which is the rate at which the cost will change with respect to a change in weight or bias values.

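The vanishing-gradient effect can be illustrated numerically. With a saturating activation such as the sigmoid, back propagation multiplies in one layer's derivative per layer, so the gradient reaching the earliest layers shrinks roughly geometrically. A minimal sketch of this effect (the depth and values are purely illustrative):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

depth = 20    # number of layers the gradient must pass through
grad = 1.0    # gradient at the output layer
for _ in range(depth):
    a = 0.0                                  # pre-activation; sigmoid'(0) = 0.25 is its maximum
    grad *= sigmoid(a) * (1.0 - sigmoid(a))  # multiply by one layer's derivative
print(grad)   # about 0.25**20, vanishingly small by the time it reaches the first layer
```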

Restricted Boltzmann Networks or Autoencoders - RBNs

In 2006, a breakthrough was achieved in tackling the issue of vanishing gradients. Geoff Hinton devised a novel strategy that led to the development of the Restricted Boltzmann Machine (RBM), a shallow two-layer net.

The first layer is the visible layer and the second layer is the hidden layer. Each node in the visible layer is connected to every node in the hidden layer. The network is known as restricted because no two nodes within the same layer are allowed to share a connection.

Autoencoders are networks that encode input data as vectors. They create a hidden, or compressed, representation of the raw data. The vectors are useful in dimensionality reduction; the vector compresses the raw data into a smaller number of essential dimensions. Autoencoders are paired with decoders, which allow the reconstruction of the input data based on its hidden representation.

An RBM is the mathematical equivalent of a two-way translator. A forward pass takes inputs and translates them into a set of numbers that encodes the inputs. A backward pass, meanwhile, takes this set of numbers and translates it back into reconstructed inputs. A well-trained net performs back prop with a high degree of accuracy.

In both steps, the weights and the biases have a critical role; they help the RBM in decoding the interrelationships between the inputs and in deciding which inputs are essential in detecting patterns. Through forward and backward passes, the RBM is trained to re-construct the input with different weights and biases until the input and the reconstruction are as close as possible. An interesting aspect of RBMs is that the data need not be labelled. This turns out to be very important for real-world data sets like photos, videos, voices and sensor data, all of which tend to be unlabelled. Instead of humans manually labelling data, the RBM automatically sorts through the data; by properly adjusting the weights and biases, an RBM is able to extract important features and reconstruct the input. RBMs are part of a family of feature extractor neural nets, which are designed to recognize inherent patterns in data. These are also called auto-encoders because they have to encode their own structure.

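As a rough sketch of the encode/decode idea in Keras (an ordinary autoencoder rather than a true RBM; the dimensions are illustrative, e.g. flattened 28x28 images):

```python
from tensorflow import keras
from tensorflow.keras import layers

input_dim, hidden_dim = 784, 64   # raw dimensionality and compressed code size

inputs = keras.Input(shape=(input_dim,))
encoded = layers.Dense(hidden_dim, activation="relu")(inputs)      # compressed representation
decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)   # reconstruction of the input

autoencoder = keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x, x, epochs=10)  # trained to reproduce its own (unlabelled) input
```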

Deep Belief Networks - DBNs

Deep belief networks (DBNs) are formed by combining RBMs and introducing a clever training method. We have a new model that finally solves the problem of the vanishing gradient. Geoff Hinton invented RBMs and also Deep Belief Nets as an alternative to back propagation.

A DBN is similar in structure to an MLP (multi-layer perceptron), but very different when it comes to training. It is the training that enables DBNs to outperform their shallow counterparts.

A DBN can be visualized as a stack of RBMs where the hidden layer of one RBM is the visible layer of the RBM above it. The first RBM is trained to reconstruct its input as accurately as possible.

The hidden layer of the first RBM is taken as the visible layer of the second RBM and the second RBM is trained using the outputs from the first RBM. This process is iterated till every layer in the network is trained.

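This greedy, layer-by-layer procedure can be sketched with scikit-learn's BernoulliRBM (the random data and layer sizes here are placeholders):

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

X = np.random.rand(500, 784)   # placeholder for input data scaled to [0, 1]

rbm1 = BernoulliRBM(n_components=256, n_iter=10)
rbm2 = BernoulliRBM(n_components=64, n_iter=10)

H1 = rbm1.fit_transform(X)     # train the first RBM on the raw input
H2 = rbm2.fit_transform(H1)    # its hidden layer becomes the visible layer of the next RBM
# ...repeat for further layers, then fine-tune the whole stack with supervised learning
```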

In a DBN, each RBM learns the entire input. A DBN works globally by fine-tuning the entire input in succession as the model slowly improves, like a camera lens slowly focusing on a picture. A stack of RBMs outperforms a single RBM, just as a multi-layer perceptron (MLP) outperforms a single perceptron.

At this stage, the RBMs have detected inherent patterns in the data, but without any names or labels. To finish training the DBN, we have to introduce labels for the patterns and fine-tune the net with supervised learning.

We need a very small set of labelled samples so that the features and patterns can be associated with a name. This small labelled set of data is used for training. This set of labelled data can be very small when compared to the original data set.

The weights and biases are altered slightly, resulting in a small change in the net's perception of the patterns and often a small increase in the total accuracy.

The training can also be completed in a reasonable amount of time by using GPUs, giving very accurate results compared to shallow nets, and we see a solution to the vanishing gradient problem too.

Generative Adversarial Networks - GANs

Generative adversarial networks are deep neural nets comprising two nets, pitted one against the other, thus the “adversarial” name.

GANs were introduced in a paper published by researchers at the University of Montreal in 2014. Facebook’s AI expert Yann LeCun, referring to GANs, called adversarial training “the most interesting idea in the last 10 years in ML.”

GANs’ potential is huge, as the networks can learn to mimic any distribution of data. GANs can be taught to create parallel worlds strikingly similar to our own in any domain: images, music, speech, prose. They are robot artists in a way, and their output is quite impressive.

In a GAN, one neural network, known as the generator, generates new data instances, while the other, the discriminator, evaluates them for authenticity.

Let us say we are trying to generate hand-written numerals like those found in the MNIST dataset, which is taken from the real world. The work of the discriminator, when shown an instance from the true MNIST dataset, is to recognize it as authentic.

Now consider the following steps of the GAN (a minimal code sketch follows the list) −

  • The generator network takes input in the form of random numbers and returns an image.

  • This generated image is given as input to the discriminator network along with a stream of images taken from the actual dataset.

  • The discriminator takes in both real and fake images and returns probabilities, a number between 0 and 1, with 1 representing a prediction of authenticity and 0 representing fake.

  • So you have a double feedback loop −

    • The discriminator is in a feedback loop with the ground truth of the images, which we know.

    • The generator is in a feedback loop with the discriminator.

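A minimal Keras sketch of the two networks described above (the architectures are heavily simplified and illustrative; the alternating training loop is only outlined in comments):

```python
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100   # size of the random-number input to the generator

# Generator: random noise in, 28x28 "image" out
generator = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28)),
])

# Discriminator: image in, probability of being real (0..1) out
discriminator = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")
# Training alternates: fit the discriminator on real vs. generated images,
# then update the generator so that it fools the (frozen) discriminator.
```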

Recurrent Neural Networks - RNNs

RNNs are neural networks in which data can flow in any direction. These networks are used for applications such as language modelling or Natural Language Processing (NLP).

The basic concept underlying RNNs is to utilize sequential information. In a normal neural network it is assumed that all inputs and outputs are independent of each other. If we want to predict the next word in a sentence, we have to know which words came before it.

RNNs are called recurrent as they repeat the same task for every element of a sequence, with the output being based on the previous computations. RNNs thus can be said to have a “memory” that captures information about what has been previously calculated. In theory, RNNs can use information in very long sequences, but in reality, they can look back only a few steps.

Long short-term memory networks (LSTMs) are the most commonly used RNNs.

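A hedged Keras sketch of a small LSTM model for a sequence task (the shapes and sizes are illustrative, e.g. a univariate time-series window):

```python
from tensorflow import keras
from tensorflow.keras import layers

timesteps, features = 30, 1   # length of the input window and values per step

model = keras.Sequential([
    layers.LSTM(32, input_shape=(timesteps, features)),  # recurrent layer with memory
    layers.Dense(1),                                      # e.g. next-value prediction
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```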

Together with convolutional neural networks, RNNs have been used as part of a model to generate descriptions for unlabelled images. It is quite amazing how well this seems to work.

Convolutional Deep Neural Networks - CNNs

If we increase the number of layers in a neural network to make it deeper, it increases the complexity of the network and allows us to model functions that are more complicated. However, the number of weights and biases will increase exponentially. As a matter of fact, learning such difficult problems can become impossible for normal neural networks. This leads to a solution: convolutional neural networks.

CNNs are extensively used in computer vision; they have also been applied in acoustic modelling for automatic speech recognition.

The idea behind convolutional neural networks is the idea of a “moving filter” which passes over the image. This moving filter, or convolution, applies to a certain neighbourhood of nodes, which for example may be pixels, where the filter applied is 0.5 times the node value.

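A plain-NumPy sketch of such a moving filter sliding over an image (the toy image and the 3x3 averaging kernel are just examples):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image and sum the element-wise products."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 "image"
kernel = np.full((3, 3), 1 / 9)                   # simple averaging filter
print(convolve2d(image, kernel))                  # 4x4 feature map
```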

Noted researcher Yann LeCun pioneered convolutional neural networks. Facebook uses these nets in its facial recognition software. CNNs have been the go-to solution for machine vision projects. There are many layers to a convolutional network. In the ImageNet challenge, a machine was able to beat a human at object recognition in 2015.

In a nutshell, Convolutional Neural Networks (CNNs) are multi-layer neural networks. The layers sometimes number up to 17 or more, and the input data is assumed to be images.

CNNs drastically reduce the number of parameters that need to be tuned. So, CNNs efficiently handle the high dimensionality of raw images.

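A rough back-of-the-envelope comparison shows why: connecting a 224x224x3 image densely to 1,000 units already needs about 150 million weights, while a convolutional layer reuses a few small filters across the whole image (the sizes below are illustrative):

```python
h, w, c = 224, 224, 3                    # an example raw image size
dense_units = 1000
dense_params = h * w * c * dense_units   # ~150 million weights for one dense layer

filters, k = 64, 3                       # 64 filters of size 3x3
conv_params = filters * (k * k * c + 1)  # ~1,800 parameters for one conv layer
print(dense_params, conv_params)
```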

Translated from: https://www.tutorialspoint.com/python_deep_learning/python_deep_learning_deep_neural_networks.htm
