A Summer of Internship: Week 6-7 at Adebali Labs

Week 6: Diving into Deep Learning 🤖🧬

Hello again, dear readers! Week 6 at Adebali Labs took a thrilling turn. While our initial weeks saw us deeply entrenched in traditional statistical analyses, this week, we took a leap into the future with neural networks. Let’s unfold this week's neural adventure!

Unraveling the Power of Neural Networks 🧠🔍

Neural networks, inspired by our very own brain's workings, can model intricate relationships in data, taking us beyond the realm of linear models. The excitement in the lab was palpable as we embarked on this innovative approach.

Our strategy was thorough (a code sketch of the full pipeline follows this list):

  1. Data Dynamics: Importing datasets and crafting a new 'length' feature.
  2. Data Prep: Preprocessing is a must! Our chosen MinMaxScaler ensured the data was in a format palatable for our neural network.
  3. Model Mechanics: Constructed using TensorFlow's Keras, our multi-layer perceptron was a beauty, tailored to handle tabular data.
  4. Model Training: Feeding the model data in bite-sized batches, we allowed it to learn, adapt, and refine.
  5. Results & Review: Employing metrics like MSE and R^2, we evaluated our model's prowess.
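To make these steps concrete, here is a minimal sketch of such a pipeline in Python. The file name, column names ('sequence' and 'repair_value'), and layer sizes are placeholders chosen for illustration, not the lab's actual configuration.

```python
# Minimal sketch of the Week 6 pipeline. File name, column names, and
# layer sizes are hypothetical placeholders, not the lab's real setup.
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error, r2_score

# 1. Data dynamics: load the dataset and craft a 'length' feature.
df = pd.read_csv("repair_data.csv")                # hypothetical file
df["length"] = df["sequence"].str.len()            # hypothetical sequence column

X = df[["length"]].values                          # plus any other numeric features
y = df["repair_value"].values                      # hypothetical target column
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# 2. Data prep: squeeze features into [0, 1] with MinMaxScaler.
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# 3. Model mechanics: a small multi-layer perceptron built with Keras.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(X_train.shape[1],)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                      # single regression output
])
model.compile(optimizer="adam", loss="mse")

# 4. Model training: learn in bite-sized batches.
model.fit(X_train, y_train, epochs=50, batch_size=32,
          validation_split=0.1, verbose=0)

# 5. Results & review: MSE and R^2 on held-out data.
y_pred = model.predict(X_test).ravel()
print("MSE:", mean_squared_error(y_test, y_pred))
print("R^2:", r2_score(y_test, y_pred))
```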

As we wrapped up Week 6, we had successfully shifted gears, welcoming the potential of deep learning. The ability to predict genetic repair values using neural networks introduced fresh perspectives, setting the stage for more advanced explorations.


Week 7: Tuning the Neural Network Symphony 🎵🤖

Welcome back! If Week 6 was about introducing neural networks, Week 7 was all about perfecting our performance. Like tuning an instrument to get the sweetest sound, our week was consumed with refining our model. Dive in to witness our progression!

Crafting the Crème de la Crème 🌟

Out of the many models put to the test, one stood out with an R^2 score that stole the spotlight: a whopping 0.8785!

  • Layers and Units: This model was a marvel with three potent layers, each packed with multiple units.
  • Activation Alchemy: With ReLU at its heart, this model aimed to avoid the pitfalls of the vanishing gradient.
  • Tuning Techniques: RMSprop, our choice of optimizer, proved its worth by adapting swiftly.
  • Hyperparameter Help: With KerasTuner in our toolkit, we found the settings that let our model thrive (see the sketch after this list).
  • Early Bird: EarlyStopping, though not used for our top model, deserves a mention for its strategic prowess in preventing overfitting.
  • Strength through Stability: Dropout layers regularized the network, making it more robust to overfitting.
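To tie these ingredients together, here is a rough sketch of how such a search might look with KerasTuner. The layer sizes, dropout rates, learning rates, and trial counts are assumptions for illustration, and X_train/y_train are the hypothetical arrays from the Week 6 sketch.

```python
# Illustrative KerasTuner search combining ReLU, Dropout, and RMSprop.
# All search ranges and names are assumptions, not the lab's exact settings.
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    model = tf.keras.Sequential()
    # Three Dense layers with tunable unit counts and ReLU activations.
    for i in range(3):
        model.add(tf.keras.layers.Dense(
            units=hp.Int(f"units_{i}", min_value=32, max_value=256, step=32),
            activation="relu"))
        # Dropout for robustness; the rate itself is part of the search.
        model.add(tf.keras.layers.Dropout(
            hp.Float(f"dropout_{i}", 0.0, 0.5, step=0.1)))
    model.add(tf.keras.layers.Dense(1))            # regression output
    # RMSprop with a tunable learning rate.
    model.compile(
        optimizer=tf.keras.optimizers.RMSprop(
            learning_rate=hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
        loss="mse")
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss",
                        max_trials=20, directory="tuning",
                        project_name="repair_mlp")

# EarlyStopping keeps unpromising trials short during the search;
# as noted above, our best model itself was trained without it.
stop_early = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)
tuner.search(X_train, y_train, epochs=100,
             validation_split=0.1, callbacks=[stop_early])

best_model = tuner.get_best_models(num_models=1)[0]
```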

Beyond just the model architecture, our week underscored the significance of data preprocessing. By eliminating extreme values using the IQR method, we bolstered our model's performance.
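For a sense of what that step looks like, a standard 1.5 × IQR filter on the hypothetical target column from the earlier sketch might read:

```python
# Rough IQR-based outlier removal; 'repair_value' and df come from the
# hypothetical Week 6 sketch above.
q1 = df["repair_value"].quantile(0.25)
q3 = df["repair_value"].quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
df = df[(df["repair_value"] >= lower) & (df["repair_value"] <= upper)]
```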

In conclusion, Week 7 was all about finesse. Through diligent iterations, evaluations, and a touch of neural magic, we sculpted a neural network model that not only promises accuracy but also embodies the culmination of our collective expertise. As we march forward, the promise of leveraging AI in the fascinating domain of genetics grows ever brighter.


Till the next post – keep exploring! ✨🔍