Papers Implemented

A collection of research papers I’ve reproduced, explored, or extended through code.

Attention Is All You Need

Vaswani et al., 2017

Implemented a scaled-down Transformer in PyTorch for machine translation. Focused on positional encodings and multi-head attention.

Tags: NLP, Deep Learning
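The two pieces this implementation focused on can be sketched in a few lines of PyTorch. This is a minimal illustration, not the full model: the sinusoidal positional encoding follows the paper's PE(pos, 2i) = sin / PE(pos, 2i+1) = cos formula, and the attention function is the scaled dot-product core that each head of multi-head attention applies (shapes and the even `d_model` assumption are illustrative).

```python
import math
import torch

def sinusoidal_positional_encoding(seq_len, d_model):
    # PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(...)
    # Assumes d_model is even, as in the paper.
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)   # (seq_len, 1)
    i = torch.arange(0, d_model, 2, dtype=torch.float32)            # (d_model/2,)
    angles = pos / torch.pow(10000.0, i / d_model)                  # (seq_len, d_model/2)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

def scaled_dot_product_attention(q, k, v):
    # Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)  # rows sum to 1
    return weights @ v
```

In the full model, multi-head attention runs this function in parallel over several learned projections of Q, K, and V and concatenates the results; the positional encoding is added to the token embeddings before the first encoder layer.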

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale

Dosovitskiy et al., 2020

Re-implemented Vision Transformer using PyTorch. The model splits input images into 16x16 patches, encodes them with positional embeddings, and processes them through transformer encoder layers. Evaluated performance on CIFAR-10 and ImageNet subsets.

Tags: Computer Vision, Deep Learning, Transformer
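The patch-embedding step described above can be sketched as a small PyTorch module. This is an illustrative sketch rather than the exact re-implementation: it uses the common trick of a convolution whose stride equals its kernel size, which is equivalent to cutting the image into non-overlapping 16x16 patches and linearly projecting each one (the default sizes here are placeholders).

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Split an image into non-overlapping patches and project each to embed_dim."""

    def __init__(self, img_size=224, patch_size=16, in_chans=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # Conv with stride == kernel_size: one output position per patch,
        # so this is patchify + linear projection in a single op.
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x)                     # (B, embed_dim, H/P, W/P)
        return x.flatten(2).transpose(1, 2)  # (B, num_patches, embed_dim)
```

In the full Vision Transformer, a learnable class token is prepended to this sequence, learnable positional embeddings are added, and the result is fed through standard transformer encoder layers.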

Deep Residual Learning for Image Recognition (ResNet)

He et al., 2015

Reproduced ResNet-18 and ResNet-34 architectures from scratch using PyTorch to understand skip connections and vanishing gradient mitigation. Trained the models on CIFAR-10 and evaluated top-1 accuracy, comparing performance with baseline CNNs.

Tags: Computer Vision, Deep Learning, CNN
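The skip connection at the heart of ResNet-18/34 can be sketched as the paper's basic residual block. A minimal PyTorch version, assuming the standard conv-BN-ReLU layout and a 1x1 projection shortcut when the spatial size or channel count changes:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Two 3x3 conv layers with an identity (or projected) skip connection."""

    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # When shape changes, project the input with a 1x1 conv so it can be added.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The residual addition: gradients flow through the identity path,
        # which is what mitigates vanishing gradients in deep stacks.
        return torch.relu(out + self.shortcut(x))
```

ResNet-18 and ResNet-34 differ only in how many of these blocks are stacked per stage (2-2-2-2 versus 3-4-6-3).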