NeuralNetwork
Basics
To build a neural network with LearningHorse, use the NetWork type.
LearningHorse.NeuralNetwork.NetWork
— Type
NetWork(layers...)
Connect multiple layers to build a neural network. NetWork also supports indexing. You can also add layers later using the add_layer!() function (sketched after the example below).
Example
julia> N = NetWork(Dense(10=>5, relu), Dense(5=>1, relu))
julia> N[1]
Dense(IO:10=>5, σ:relu)
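The exact signature of add_layer!() isn't documented above, so the call below is a sketch that assumes add_layer!(network, layer) appends a layer to the network:
julia> add_layer!(N, Dense(1=>1, relu));  # assumed signature: appends a layer to N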
Layers
LearningHorse.NeuralNetwork.Conv
— Type
Conv(kernel, in=>out, σ; stride = 1, padding = 0, set_w = "Xavier")
This is the traditional convolution layer. kernel is a tuple of one or two integers specifying the kernel size. in and out specify the numbers of input and output channels.
The input data must have dimensions WHCB (width, height, channel, batch). If your data has dimensions WHC, you must add a B dimension.
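One way to add the B dimension to WHC data is a plain reshape (this uses only Base Julia, not a LearningHorse helper):
julia> x = rand(10, 10, 2);            # WHC data
julia> x = reshape(x, size(x)..., 1);  # append a batch dimension -> WHCB
julia> size(x)
(10, 10, 2, 1)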
stride and padding are single integers or tuples of two elements. If you specify KeepSize as padding, the input is padded so that the output has the same size as the input. set_w is "Xavier" or "He"; it decides the method used to initialize the weights, and works the same way as in Dense().
Example
julia> C = Conv((2, 2), 2=>2, relu)
Convolution(k:(2, 2), IO:2 => 2, σ:relu)
julia> C(rand(10, 10, 2, 5)) |> size
(9, 9, 2, 5)
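With kernel size k, stride s, and padding p, each spatial output size follows the usual convolution formula floor((W - k + 2p) / s) + 1; here (10 - 2 + 0) / 1 + 1 = 9, matching the output above.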
When you specify KeepSize as padding, in some cases the output will be one size smaller than the input, because of how the output size expression rounds.
julia> C = Conv((2, 2), 2=>2, relu, padding = KeepSize)
Convolution(k:(2, 2), IO:2 => 2, σ:relu)
julia> C(rand(10, 10, 2, 5)) |> size
(9, 9, 2, 5)
LearningHorse.NeuralNetwork.Dense
— Type
Dense(in=>out, σ; set_w = "Xavier", set_b = zeros)
Create a traditional Dense layer, whose forward propagation is given by: y = σ.(W * x .+ b). The input x should be a vector of length in. (Learning with batches is not supported yet.)
Example
julia> D = Dense(5=>2, relu)
Dense(IO:5=>2, σ:relu)
julia> D(rand(Float64, 5)) |> size
(2,)
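The forward rule can be checked by hand in plain Julia. W and b below are hypothetical stand-ins for the layer's internal parameters, not LearningHorse's actual field names:
julia> W, b = rand(2, 5), zeros(2);  # hypothetical parameters of a 5=>2 layer
julia> x = rand(5);
julia> y = relu.(W * x .+ b);        # y = σ.(W * x .+ b) with σ = relu
julia> size(y)
(2,)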
LearningHorse.NeuralNetwork.Dropout
— Type
Dropout(p)
This layer applies dropout, randomly zeroing elements of the input with probability p.
Example
julia> D = Dropout(0.25)
Dropout(0.25)
julia> D(rand(10))
10-element Array{Float64,1}:
 0.0
 0.3955865029078952
 0.8157710047424143
 1.0129613533211907
 0.8060508293474877
 1.1067504108970596
 0.1461289547292684
 0.0
 0.04581776023870532
 1.2794087133638332
LearningHorse.NeuralNetwork.Flatten
— Type
Flatten()
This layer flattens the input, e.g. changing an image into a vector.
Example
julia> F = Flatten()
Flatten(())
julia> F(rand(10, 10, 2, 5)) |> size
(1000,)
Optimizers
LearningHorse.NeuralNetwork.Descent
— Type
Descent(η=0.1)
Basic gradient descent optimizer with learning rate η.
Parameters
- learning rate: η
Example
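A minimal construction sketch (output is suppressed since the optimizer's printed form isn't documented above), followed by the standard descent update written in plain Julia for reference:
julia> opt = Descent(0.25);               # learning rate η = 0.25
julia> θ, g, η = rand(3), rand(3), 0.25;  # hypothetical parameters and gradient
julia> θ .-= η .* g;                      # standard update: θ ← θ - η * ∇θ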
LearningHorse.NeuralNetwork.Momentum
— Type
Momentum(η=0.01, α=0.9, velocity)
Momentum gradient descent optimizer with learning rate η and velocity parameter α.
Parameters
- learning rate: η
- velocity parameter: α
Example
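A construction sketch, plus the usual momentum update in plain Julia for reference (LearningHorse's internal state layout may differ):
julia> opt = Momentum(0.01, 0.9);             # η = 0.01, α = 0.9
julia> v, θ, g = zeros(3), rand(3), rand(3);  # hypothetical velocity, parameters, gradient
julia> v = 0.9 .* v .- 0.01 .* g;             # v ← α * v - η * ∇θ
julia> θ .+= v;                               # θ ← θ + v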
LearningHorse.NeuralNetwork.AdaGrad
— Type
AdaGrad(η = 0.01)
Gradient descent optimizer with learning rate attenuation.
Parameters
- η: initial learning rate
Examples
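A construction sketch, plus the standard AdaGrad accumulation step in plain Julia for reference (the ε term below is an assumption for numerical stability):
julia> opt = AdaGrad(0.01);                    # initial learning rate η = 0.01
julia> h, θ, g = zeros(3), rand(3), rand(3);   # hypothetical accumulator, parameters, gradient
julia> h .+= g .^ 2;                           # accumulate squared gradients
julia> θ .-= 0.01 .* g ./ (sqrt.(h) .+ 1e-7);  # per-parameter attenuated step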
LearningHorse.NeuralNetwork.Adam
— Type
Adam(η=0.01, β=(0.9, 0.99))
Adaptive moment estimation (Adam) gradient descent optimizer.
Parameters
- η: learning rate
- β: decay rates of the momentum estimates
Examples
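A construction sketch with explicit defaults; output is suppressed since the optimizer's printed form isn't documented above:
julia> opt = Adam(0.01, (0.9, 0.99));  # learning rate η and moment decay rates β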