CNN-Syllabus
A mind map of the past and present of CNNs
Outline / Content
Use Convolution Operation in place of General Matrix Multiplication
Neural Network
Originally designed for processing visual imagery
Definition
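A minimal sketch of this definition in PyTorch (the framework is assumed here, not part of the map): one small shared kernel slides over every patch, replacing the full matrix multiply a dense layer would need.

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 1, 5, 5)   # one 1-channel 5x5 "image"
    k = torch.randn(1, 1, 3, 3)   # one shared 3x3 kernel

    # Convolution: 9 shared weights reused at every spatial position,
    # instead of the 25x9 independent weights a dense layer would need.
    y = F.conv2d(x, k)            # shape (1, 1, 3, 3)

    # The same result written as an explicit sliding window.
    manual = torch.zeros(3, 3)
    for i in range(3):
        for j in range(3):
            manual[i, j] = (x[0, 0, i:i+3, j:j+3] * k[0, 0]).sum()
    print(torch.allclose(y[0, 0], manual))   # True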
2018
Channel Boosted CNN
Channel Boosting
CBAM
Attention
Residual Attention Module
SENet
Feature Map Exploitation
2017
PyramidNet
Width Exploitation
PolyNet
Wide ResNet
ResNeXt
Width Exploitation
FractalNet
Depth Revolution
2016
DenseNet
CMPE-SE
Feature Map Exploitation
Multi-Path Connectivity
2015
Highway Net
Skip Connections
2013
ZFNet
Feature Visualisation
Parameter Optimisation
DenseNet
Depth Exploitation
Spatial Exploitation
2010
ImageNet (ILSVRC begins)
2007
NVIDIA releases CUDA
2006
GPUs applied to CNN training
Max Pooling
Early 2000s
CNN Stagnation
FCN
SegNet
DeconvNet
ENet
DeepLab
GCN
Semantic segmentation networks (https://zhuanlan.zhihu.com/p/39430439)
ResNet18
ResNet34
ResNet50
ResNet101
ResNet152
ResNet
Skip Connection
Depth Revolution
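A hedged sketch of the skip connection behind this depth revolution (PyTorch assumed; the channel counts are illustrative, not ResNet's exact configuration):

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        # y = F(x) + x: the identity shortcut lets gradients flow past
        # the convolutions, which is what made 100+ layer nets trainable.
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU()

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)   # the skip connection

    block = ResidualBlock(16)
    print(block(torch.randn(2, 16, 8, 8)).shape)   # torch.Size([2, 16, 8, 8])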
VGG-19
VGG-16
Effective Receptive Field (Small Size Filters)
2014
VGG
Inception-ResNet-v2
Inception-ResNet-v1
Inception V4
Inception V3
Factorization
Inception V2
BottleNeck
Parallelism
Inception Block
Inception V1
GoogLeNet
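A sketch tying the Inception nodes together: parallel 1x1/3x3/5x5/pool branches, 1x1 bottlenecks, and depth concatenation. PyTorch assumed; the branch widths are illustrative, not GoogLeNet's published numbers.

    import torch
    import torch.nn as nn

    class InceptionBlock(nn.Module):
        # Parallel branches concatenated along the channel axis
        # (the "DepthConcat" node elsewhere in this map). The 1x1
        # convs are the bottlenecks that keep 3x3/5x5 branches cheap.
        def __init__(self, in_ch):
            super().__init__()
            self.b1 = nn.Conv2d(in_ch, 16, 1)
            self.b2 = nn.Sequential(nn.Conv2d(in_ch, 8, 1),
                                    nn.Conv2d(8, 16, 3, padding=1))
            self.b3 = nn.Sequential(nn.Conv2d(in_ch, 8, 1),
                                    nn.Conv2d(8, 16, 5, padding=2))
            self.b4 = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                    nn.Conv2d(in_ch, 16, 1))

        def forward(self, x):
            return torch.cat([self.b1(x), self.b2(x),
                              self.b3(x), self.b4(x)], dim=1)

    blk = InceptionBlock(32)
    print(blk(torch.randn(1, 32, 28, 28)).shape)   # torch.Size([1, 64, 28, 28])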
SqueezeNet
ShuffleNet
2012
AlexNet
1998
LeNet-5
1989
ConvNet
1979
Neocognitron
Structure Timeline
PyTorch
Keras
TensorFlow
Programming
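For concreteness, a minimal LeNet-5-style model in PyTorch (any of the three frameworks above would do; the layer sizes follow the classic 28x28 grayscale setup and are illustrative):

    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 4 * 4, 120), nn.ReLU(),
        nn.Linear(120, 84), nn.ReLU(),
        nn.Linear(84, 10),            # 10-class output
    )

    print(model(torch.randn(1, 1, 28, 28)).shape)   # torch.Size([1, 10])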
overfitting
Problems
Stochastic Pooling
Average Pooling
Max Pooling
Pooling
Pooling Size
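A small worked example of the pooling variants above (PyTorch assumed); a 2x2 pooling size with stride 2 halves each spatial dimension:

    import torch
    import torch.nn.functional as F

    x = torch.tensor([[[[ 1.,  2.,  3.,  4.],
                        [ 5.,  6.,  7.,  8.],
                        [ 9., 10., 11., 12.],
                        [13., 14., 15., 16.]]]])

    # 2x2 window, stride 2: each output value summarizes one patch.
    print(F.max_pool2d(x, 2))   # [[[[ 6.,  8.], [14., 16.]]]]
    print(F.avg_pool2d(x, 2))   # [[[[ 3.5,  5.5], [11.5, 13.5]]]]
    # Stochastic pooling (no built-in op used here) would instead sample
    # one value per patch with probability proportional to its activation.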
Mask Matrix
Feature Map
Output layer
Hidden layers
Input layer
Convolutional layers
Filter
2D
Kernel Size
Stride
Padding
Dilation
Weights
Parameters
Parameter sharing
Local connectivity
Spatial arrangement
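These hyperparameters combine into the standard output-size formula; a quick plain-Python sketch of that arithmetic, plus the parameter count that parameter sharing buys:

    # Output size of a conv layer:
    #   out = floor((in + 2*padding - dilation*(kernel-1) - 1) / stride) + 1
    def conv_out_size(n, kernel, stride=1, padding=0, dilation=1):
        return (n + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

    # Example: 224x224 input, 7x7 kernel, stride 2, padding 3 -> 112
    print(conv_out_size(224, kernel=7, stride=2, padding=3))   # 112

    # Parameter sharing: a Conv2d(3, 64, 7) layer has 64*3*7*7 + 64 = 9472
    # weights regardless of image size, because the same filters slide
    # over every spatial position (local connectivity).
    print(64 * 3 * 7 * 7 + 64)   # 9472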
Early stopping
Max-norm constraints
Weight decay
Regularization
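A brief sketch of two of these regularizers in PyTorch (assumed framework; the hyperparameter values are illustrative):

    import torch

    # Weight decay as an added L2 regularizer; most optimizers expose it
    # directly as a hyperparameter.
    w = torch.randn(10, requires_grad=True)
    opt = torch.optim.SGD([w], lr=0.1, weight_decay=1e-4)  # adds 1e-4*w to the gradient

    # Max-norm constraint: after each update, clip the weight vector's norm.
    max_norm = 3.0
    with torch.no_grad():
        norm = w.norm()
        if norm > max_norm:
            w.mul_(max_norm / norm)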
Receptive Field
CPU
Xeon Phi
GPU
Devices
Fine-tuning
Human-interpretable explanations
Residual Connection
Downsampling
Upsampling
Feature Invariance
Local Response Normalization
Normalisation
Data Augmentation
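A typical augmentation pipeline, sketched with torchvision (an assumption; the map does not name a library). Each epoch then sees a different random variant of every training image:

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomResizedCrop(224),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])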
Exponentially weighted average
Bias correction in exponentially weighted average
Momentum
Nesterov Momentum
Adagrad
Adadelta
RMSprop
Adam
Optimizer
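The optimizer nodes above compose into Adam: an exponentially weighted average of the gradient (momentum) and of its square (as in RMSprop), each bias-corrected for the zero initialization. A NumPy sketch of one step, with the standard default hyperparameters:

    import numpy as np

    def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        m = b1 * m + (1 - b1) * g        # 1st-moment EWA (momentum)
        v = b2 * v + (1 - b2) * g**2     # 2nd-moment EWA (RMSprop)
        m_hat = m / (1 - b1**t)          # bias correction
        v_hat = v / (1 - b2**t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
        return w, m, v

    w, m, v = np.ones(3), np.zeros(3), np.zeros(3)
    for t in range(1, 4):
        w, m, v = adam_step(w, np.array([0.1, -0.2, 0.3]), m, v, t)
    print(w)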
Convergence
Transfer learning
Batch gradient descent
Mini-batch gradient descent
Stochastic gradient descent
Gradient Descent
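The three variants differ only in how many samples feed each update. A NumPy sketch of mini-batch gradient descent on least squares; batch_size=100 recovers batch GD and batch_size=1 recovers stochastic GD:

    import numpy as np

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(100, 3)), rng.normal(size=100)
    w = np.zeros(3)

    batch_size, lr = 10, 0.01
    for epoch in range(5):
        idx = rng.permutation(100)           # reshuffle each epoch
        for start in range(0, 100, batch_size):
            b = idx[start:start + batch_size]
            grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    print(w)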
ReLU
ReLU6
SoftPlus
SoftMax
Tanh
Sigmoid
Bias_add
before dropout
after dropout
dropout rate
rescale rate = 1 / (1 - dropout rate)
rescale rate
Dropout (neurons)
DropConnect
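A NumPy sketch of inverted dropout, showing where the rescale rate = 1 / (1 - dropout rate) above comes from: rescaling the surviving activations keeps their expected value unchanged between training and test.

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.normal(size=1000)

    dropout_rate = 0.5
    mask = rng.random(1000) >= dropout_rate   # the mask matrix
    dropped = a * mask / (1 - dropout_rate)   # rescale rate applied

    print(a.mean(), dropped.mean())           # roughly equal in expectation
    # DropConnect zeroes individual weights instead of whole activations.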
playground
Activation Function
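The activation functions listed above, evaluated on a small vector in NumPy for reference:

    import numpy as np

    x = np.array([-2.0, 0.0, 2.0])

    print(np.maximum(0, x))                    # ReLU
    print(np.minimum(np.maximum(0, x), 6))     # ReLU6
    print(np.log1p(np.exp(x)))                 # SoftPlus
    print(np.tanh(x))                          # Tanh
    print(1 / (1 + np.exp(-x)))                # Sigmoid
    e = np.exp(x - x.max())                    # SoftMax (stabilized)
    print(e / e.sum())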
DepthConcat
Forward
Backpropagation
Buzz words
Image recognition
Video analysis
Natural language processing
Anomaly Detection
Drug discovery
Health risk assessment
Biomarkers of aging discovery
Checkers game
Go
Time series forecasting
Cultural Heritage and 3D-datasets
Applications
Pooling layer
Loss layer
Activation layer
Fully connected layer
CNN (Convolutional Neural Network)