Example output
- Input layer: (224, 224, 3) image input
- Hidden layers:
  - Convolutional layer, 32 filters, ReLU activation function
  - Max pooling layer
  - Convolutional layer, 64 filters, ReLU activation function
  - Max pooling layer
  - Convolutional layer, 128 filters, ReLU activation function
  - Max pooling layer
  - Flatten layer
  - Dense layer, 256 units, ReLU activation function
  - Dropout layer
  - Dense layer, 128 units, ReLU activation function
  - Dropout layer
- Output layer: Dense layer, 10 units, softmax activation function
- Loss function: categorical cross-entropy
- Optimizer: Adam optimizer with default parameters (learning rate=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-07)
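
A minimal Keras sketch of this architecture follows. The listing leaves out kernel sizes, pooling windows, and dropout rates, so the 3×3 kernels, 2×2 pooling, and 0.5 dropout used here are assumptions, not part of the spec:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),             # 224x224 RGB input
    layers.Conv2D(32, (3, 3), activation="relu"),  # 32 filters; 3x3 kernel assumed
    layers.MaxPooling2D((2, 2)),                   # 2x2 pooling assumed
    layers.Conv2D(64, (3, 3), activation="relu"),  # 64 filters
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"), # 128 filters
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                           # rate assumed
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                           # rate assumed
    layers.Dense(10, activation="softmax"),        # 10-class output
])

model.compile(
    loss="categorical_crossentropy",
    # Adam with Keras defaults: learning_rate=0.001, beta_1=0.9,
    # beta_2=0.999, epsilon=1e-07
    optimizer=tf.keras.optimizers.Adam(),
    metrics=["accuracy"],
)
```

Categorical cross-entropy with a softmax output expects one-hot encoded labels; with integer labels, `sparse_categorical_crossentropy` would be the equivalent choice.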