Most people think machine learning is about math. After spending months building a convolutional neural network for my thesis — classifying vehicles into five categories for a cashless toll gate system — I learned it is mostly about discipline, patience, and debugging in ways that feel completely different from traditional software.
In regular software, bugs are usually traceable. A variable holds the wrong value, a function returns null, a network request fails. With CNNs, bugs are silent. Your model trains, your loss curve looks reasonable, and then your validation accuracy stalls at 62% with no clear explanation. Is it the architecture? The learning rate? The dataset quality? The preprocessing pipeline? You rarely know which one broke things — so you change one variable at a time and wait hours for a result.
That process taught me to be more systematic than I had ever been. I started keeping a training log — not just loss and accuracy, but every hyperparameter change, every dataset modification, and why I made each decision. That habit has since carried over into how I approach regular software projects.
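The log itself was nothing fancy. A minimal sketch of the idea, assuming an append-only JSON-lines file (the field names and values here are illustrative, not my actual thesis runs):

```python
import json
from datetime import datetime, timezone

def log_run(path, hyperparams, metrics, note):
    """Append one training run's settings, results, and rationale to a JSONL log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "hyperparams": hyperparams,
        "metrics": metrics,
        "note": note,  # the "why" behind the change -- the part I kept forgetting
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example entry (hypothetical numbers):
log_run(
    "training_log.jsonl",
    hyperparams={"lr": 1e-4, "batch_size": 32, "epochs": 20},
    metrics={"val_accuracy": 0.62, "val_loss": 1.05},
    note="Lowered learning rate after loss plateaued at epoch 8.",
)
```

One record per run, grep-able and diff-able, which is all an experiment log really needs to be.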
Before I wrote a single model layer, I spent weeks cleaning and preparing the COCO dataset subset for vehicle categories. Annotation inconsistencies, class imbalance, image resolution variance — all of it compounds into noise that your model will faithfully learn. I started treating the dataset like source code: version-controlled, documented, and validated before any training run.
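"Validated before any training run" meant failing fast on bad data. Here is a minimal sketch of that kind of pre-flight check, assuming a hypothetical one-directory-per-class layout (the class names match my five categories, but the thresholds and file layout are illustrative):

```python
from collections import Counter
from pathlib import Path

# Assumed layout: dataset/<class_name>/<image files>
VEHICLE_CLASSES = {"car", "bus", "truck", "motorcycle", "tricycle"}

def validate_dataset(root):
    """Fail fast on unknown classes, missing classes, or severe imbalance."""
    counts = Counter()
    for class_dir in Path(root).iterdir():
        if not class_dir.is_dir():
            continue
        if class_dir.name not in VEHICLE_CLASSES:
            raise ValueError(f"Unexpected class directory: {class_dir.name}")
        counts[class_dir.name] = sum(
            1 for p in class_dir.iterdir() if p.suffix in {".jpg", ".png"}
        )
    missing = VEHICLE_CLASSES - counts.keys()
    if missing:
        raise ValueError(f"Missing classes: {sorted(missing)}")
    # Flag imbalance worse than 10:1 between largest and smallest class
    if max(counts.values()) > 10 * min(counts.values()):
        raise ValueError(f"Severe class imbalance: {dict(counts)}")
    return dict(counts)
```

Running a check like this before every training session costs seconds and saves the hours a run on silently broken data would waste.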
This changed how I think about data in all software. Whether it is a database schema, an API response shape, or a form input — the quality of data flowing through a system determines the quality of the output. Garbage in, garbage out applies everywhere.
Achieving high training accuracy felt like a milestone. Then I ran inference on real tollgate footage and watched the model misclassify a motorcycle as a tricycle in the rain. That gap between benchmark accuracy and production performance is where real engineering lives. It pushed me to evaluate differently — not just overall accuracy, but precision per class, confusion matrices, and edge cases that the benchmark never surfaced.
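Per-class evaluation is simple enough to hand-roll. A minimal sketch, with purely illustrative labels and predictions (not my thesis results):

```python
CLASSES = ["car", "bus", "truck", "motorcycle", "tricycle"]

def confusion_matrix(y_true, y_pred, classes):
    """Rows are true classes, columns are predicted classes."""
    idx = {c: i for i, c in enumerate(classes)}
    m = [[0] * len(classes) for _ in classes]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

def per_class_precision(matrix, classes):
    """Precision per class: correct predictions / all predictions of that class."""
    out = {}
    for j, c in enumerate(classes):
        predicted = sum(row[j] for row in matrix)
        out[c] = matrix[j][j] / predicted if predicted else 0.0
    return out

# Hypothetical predictions on five frames:
y_true = ["car", "car", "bus", "motorcycle", "tricycle"]
y_pred = ["car", "bus", "bus", "tricycle", "tricycle"]
precision = per_class_precision(confusion_matrix(y_true, y_pred, CLASSES), CLASSES)
```

An overall accuracy of 60% on this toy data hides that every tricycle prediction is only right half the time — exactly the kind of failure a single aggregate number buries.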
Building that CNN changed how I approach all software. I think more carefully about data before code, I separate concerns more strictly, and I have a lot more patience for things that fail without obvious error messages. Machine learning taught me to be a more methodical engineer — and that is a skill that transfers everywhere.