
Functions to generate nn_module (language) expressions for feed-forward and recurrent neural networks.

Usage

ffnn_generator(
  nn_name = "DeepFFN",
  hd_neurons,
  no_x,
  no_y,
  activations = NULL,
  output_activation = NULL,
  bias = TRUE
)

rnn_generator(
  nn_name = "DeepRNN",
  hd_neurons,
  no_x,
  no_y,
  rnn_type = "lstm",
  bias = TRUE,
  activations = NULL,
  output_activation = NULL,
  bidirectional = TRUE,
  dropout = 0,
  ...
)

Arguments

nn_name

Character. Name of the generated module class. Default is "DeepFFN" for ffnn_generator() and "DeepRNN" for rnn_generator().

hd_neurons

Integer vector. Number of neurons in each hidden layer (for rnn_generator(), the number of hidden units in each recurrent layer).

no_x

Integer. Number of input features.

no_y

Integer. Number of output features.

activations

Activation function specifications for each hidden layer. Can be:

  • NULL: No activation functions.

  • Character vector: e.g., c("relu", "sigmoid").

  • List: e.g., act_funs(relu, elu, softshrink = args(lambd = 0.5)).

  • activation_spec object from act_funs().

If activations has length 1, that single activation is applied after every hidden layer; see the short sketch below.
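
A minimal sketch of the accepted forms (assuming act_funs() and args() from this package are available):

activations = "relu"                                               # length 1: reused for every hidden layer
activations = c("relu", "sigmoid", "relu")                         # one activation per hidden layer
activations = act_funs(relu, elu, softshrink = args(lambd = 0.5))  # parameterized activation_spec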

output_activation

Optional. Activation function for the output layer. Same format as activations but should be a single activation.

bias

Logical. Whether to use bias weights. Default is TRUE.

rnn_type

Character. Type of RNN to use. Must be one of "rnn", "lstm", or "gru". Default is "lstm".

bidirectional

Logical. Whether to use bidirectional RNN layers. Default is TRUE.

dropout

Numeric. Dropout rate between RNN layers. Default is 0.

...

Additional arguments (currently unused).

Value

For ffnn_generator(), a torch module expression representing the FFNN.

For rnn_generator(), a torch module expression representing the RNN.

Details

The generated FFNN module will have the specified number of hidden layers, with each layer containing the specified number of neurons. Activation functions can be applied after each hidden layer as specified. This can be used for both classification and regression tasks.

The generated module properly namespaces all torch functions to avoid polluting the global namespace.

The generated RNN module will have the specified number of recurrent layers, with each layer containing the specified number of hidden units. Activation functions can be applied after each RNN layer as specified. The final output is taken from the last time step and passed through a linear layer.
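
For example, a rough input/output sketch (the batch-first input layout of (batch, time steps, no_x) is an assumption about the generated module's configuration; the output then has shape (batch, no_y)):

model = eval(rnn_generator(hd_neurons = c(32, 16), no_x = 10, no_y = 1))()  # needs the torch backend
x = torch::torch_randn(8, 20, 10)   # 8 sequences, 20 time steps, no_x = 10 features
y_hat = model(x)                    # expected shape: (8, 1), taken from the last time step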


Feed-Forward Neural Network Module Generator

The ffnn_generator() function generates a feed-forward neural network (FFNN) module expression from the torch package in R. It allows customization of the FFNN architecture, including the number of hidden layers, neurons, and activation functions.

Recurrent Neural Network Module Generator

The rnn_generator() function generates a recurrent neural network (RNN) module expression from the torch package in R. It allows customization of the RNN architecture, including the number of hidden layers, neurons, RNN type, activation functions, and other parameters.

Examples

# Generate an MLP module with 3 hidden layers
ffnn_mod = ffnn_generator(
    nn_name = "MyFFNN",
    hd_neurons = c(64, 32, 16),
    no_x = 10,
    no_y = 1,
    activations = 'relu'
)

# Evaluate and instantiate
model = eval(ffnn_mod)()
# (Instantiation needs the torch backend; if it is missing, install it with torch::install_torch().)
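
# Forward-pass sketch (assumes the backend is installed; the (batch, no_x) -> (batch, no_y)
# shapes are illustrative):
x = torch::torch_randn(8, 10)   # 8 observations with no_x = 10 features
y_hat = model(x)                # expected shape: (8, 1)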

# More complex: With different activations
ffnn_mod2 = ffnn_generator(
    nn_name = "MyFFNN2",
    hd_neurons = c(128, 64, 32),
    no_x = 20,
    no_y = 5,
    activations = act_funs(
        relu,
        selu,
        sigmoid
    )
)

# Even more complex: Different activations and customized argument
# for the specific activation function
ffnn_mod2 = ffnn_generator(
    nn_name = "MyFFNN2",
    hd_neurons = c(128, 64, 32),
    no_x = 20,
    no_y = 5,
    activations = act_funs(
        relu,
        selu,
        softshrink = args(lambd = 0.5)
    )
)

# Customize output activation (softmax is useful for classification tasks)
ffnn_mod3 = ffnn_generator(
    hd_neurons = c(64, 32),
    no_x = 10,
    no_y = 3,
    activations = 'relu',
    output_activation = act_funs(softmax = args(dim = 2L))
)
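# With a (batch, no_y = 3) output, dim = 2L applies the softmax across the three classes,
# so each predicted row sums to 1 (assuming the output layer is two-dimensional).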

# Basic LSTM with 2 layers
rnn_mod = rnn_generator(
    nn_name = "MyLSTM",
    hd_neurons = c(64, 32),
    no_x = 10,
    no_y = 1,
    rnn_type = "lstm",
    activations = 'relu'
)

# Evaluate and instantiate
model = eval(rnn_mod)()
# (Instantiation needs the torch backend; if it is missing, install it with torch::install_torch().)
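
# Forward-pass sketch (assumes the backend is installed and batch-first input of
# shape (batch, time steps, no_x)):
x = torch::torch_randn(8, 20, 10)   # 8 sequences, 20 time steps, no_x = 10 features
y_hat = model(x)                    # expected shape: (8, 1), from the last time step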

# GRU with different activations
rnn_mod2 = rnn_generator(
    nn_name = "MyGRU",
    hd_neurons = c(128, 64, 32),
    no_x = 20,
    no_y = 5,
    rnn_type = "gru",
    activations = act_funs(relu, elu, relu),
    bidirectional = FALSE
)

if (FALSE) { # \dontrun{
# Parameterized activation and dropout
# (Will throw an error due to `nnf_tanh()` not being available in `{torch}`)
rnn_mod3 = rnn_generator(
    hd_neurons = c(100, 50, 25),
    no_x = 15,
    no_y = 3,
    rnn_type = "lstm",
    activations = act_funs(
        relu,
        leaky_relu = args(negative_slope = 0.01),
        tanh
    ),
    bidirectional = TRUE,
    dropout = 0.3
)
} # }