SO Basics of Neural Networks 2025
from Thursday, 16 October 2025 (09:00) to Friday, 17 October 2025 (17:00)
Thursday, 16 October 2025
Session 1: Deep Learning fundamentals - Francisco Eduardo Sanchez Karhunen (Universidad de Sevilla)
09:00 - 12:00
Room: Salón de Actos
- Roots of deep learning techniques. Reasons for layer stacking. Role of weights and activation functions. A layer as a map between representation spaces. Model parameters. Basic structure for classification tasks. Network training as an optimization problem. Weight-initialization techniques. Typical loss functions and optimizers. Learning-rate scheduling.
- Hands-on lab: Build a basic multilayer neural network from scratch using the TensorFlow 2 library. Sequential mode of layer stacking. Train the model to tackle a classic image classification problem.
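The Sequential layer stacking covered in the lab can be sketched in TensorFlow 2 as follows. This is a minimal sketch, not the lab's actual model: the 28x28 grayscale input, the 128-unit hidden layer, and the 10 output classes are assumptions chosen to match a classic MNIST-style problem.

```python
import tensorflow as tf

# Minimal Sequential multilayer network for a 10-class image problem.
# Input shape and layer sizes are illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                   # one 28x28 grayscale image
    tf.keras.layers.Flatten(),                        # image -> flat feature vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden representation space
    tf.keras.layers.Dense(10, activation="softmax"),  # one probability per class
])

# A typical loss/optimizer pairing for integer-labelled classification.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Each Dense layer maps one representation space to the next; the final softmax turns the last representation into class probabilities, which is the "basic structure for classification tasks" named above.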
Session 2: Convolutional Neural Networks fundamentals - Francisco Eduardo Sanchez Karhunen (Universidad de Sevilla)
14:00 - 18:30
Room: Salón de Actos
- Drawbacks of classical multilayer networks in image classification tasks. How the human brain handles images. Convolutional layers: padding and stride. Types of pooling layers. Kernels for feature-map extraction. Kernel stacking. CNNs as an extension of classical stacked-layer models. Top layers in CNNs.
- Hands-on lab: Build a basic CNN for image classification from scratch using the Galaxy10 dataset.
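The CNN structure described above (stacked kernels, pooling, classical top layers) can be sketched in TensorFlow 2 like this. The 69x69 RGB input shape is an assumption based on the original Galaxy10 image size; the lab's exact architecture may differ, but Galaxy10 does have 10 classes.

```python
import tensorflow as tf

# Basic CNN: stacked Conv2D kernels extract feature maps, pooling layers
# shrink them, and classical Dense "top" layers perform the classification.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(69, 69, 3)),                                 # assumed Galaxy10 image size
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),  # 32 kernels, "same" padding
    tf.keras.layers.MaxPooling2D(),                                    # halve the spatial size
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),  # deeper kernel stack
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),                                         # hand off to the top layers
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),                   # Galaxy10 has 10 classes
])
```

Note how the top of the network is exactly the classical stacked-layer model from Session 1; only the feature-extraction front end is new.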
Friday, 17 October 2025
Session 3: Practical considerations in real-world CNNs - Francisco Eduardo Sanchez Karhunen (Universidad de Sevilla)
09:00 - 13:00
Room: Salón de Actos
- Overfitting in CNNs. Techniques for reducing overfitting: data augmentation and dropout. Types of data augmentation. Dropout rates. Transfer learning: concept and usage. Top pre-trained models for image classification. Handling large image datasets: ImageDataGenerators.
- Hands-on lab: Use ImageDataGenerators combined with realistic folder structures in image classification problems. Include data augmentation in the preprocessing pipeline. Add dropout layers to the CNN designed in Session 2. Apply transfer learning in a realistic setting.
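The three overfitting-reduction techniques above can be combined in one TensorFlow 2 sketch. The backbone choice (MobileNetV2), input size, dropout rate, and class count are all illustrative assumptions; `weights=None` keeps the sketch offline, whereas a real transfer-learning run would pass `weights="imagenet"` to load the pre-trained kernels.

```python
import tensorflow as tf

NUM_CLASSES = 10  # assumption for illustration

# Data augmentation as preprocessing layers; the random transforms are
# only active during training, not at inference time.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

# Pre-trained backbone for transfer learning (weights=None here to avoid
# a download; use weights="imagenet" in practice).
base = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False, weights=None, pooling="avg")
base.trainable = False  # freeze the transferred feature extractor

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    augment,
    base,
    tf.keras.layers.Dropout(0.5),  # dropout rate of 0.5 before the classifier
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
```

Freezing the backbone means only the final Dense layer is trained at first; a common follow-up is to unfreeze the top of the backbone and fine-tune it with a small learning rate.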
Session 4: Evaluation - Francisco Eduardo Sanchez Karhunen
14:00 - 17:00
Room: Salón de Actos