Neural Architecture Search without Training using Sensitivity


Neural architecture search (NAS) is a widely used technique for automating the design of deep neural networks. Most NAS algorithms are based on reinforcement learning methods that require training a large number of candidate models on large datasets, which is very costly in both computation time and resources. A very recent work [1] proposed a measure, based on the linear maps induced by data points in an untrained network, to estimate the network's final test performance before training. Similarly, it was shown in [2] that sensitivity is a good estimator of the final test performance before the models are trained.
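As a rough illustration of scoring an untrained network from the linear maps its data points induce, the sketch below computes a NASWOT-style score in the spirit of [1]: each input is mapped to the binary pattern of its ReLU activations, and the log-determinant of the Hamming-similarity kernel over these patterns serves as the score. The plain-NumPy MLP, its initialisation, and all function names here are illustrative assumptions, not the exact setup of [1].

```python
import numpy as np

def relu_codes(weights, X):
    """Binary ReLU activation patterns of an untrained MLP, one code per input.

    `weights` is a list of weight matrices; the architecture and random
    initialisation are illustrative assumptions for this sketch.
    """
    h, codes = X, []
    for W in weights:
        pre = h @ W
        codes.append(pre > 0)          # which units fire for this input
        h = np.maximum(pre, 0.0)       # ReLU
    return np.concatenate(codes, axis=1)

def naswot_style_score(weights, X):
    """Log-determinant of the Hamming-similarity kernel over activation codes."""
    C = relu_codes(weights, X).astype(float)
    # K[i, j] = number of units on which inputs i and j agree
    K = C @ C.T + (1.0 - C) @ (1.0 - C).T
    sign, logdet = np.linalg.slogdet(K)
    return logdet if sign > 0 else -np.inf
```

Intuitively, a higher score means the untrained network separates inputs into more distinct activation patterns, which [1] found to correlate with trained test accuracy.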

In this project, we aim to study the sensitivity metric introduced in [2] in the NAS-Bench-201 search space (a setting similar to that of [1]). The project requires a literature review of NAS algorithms. The sensitivity metric should then be compared against state-of-the-art NAS methods on this benchmark in terms of cost and performance.
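To make the intended comparison concrete, here is a minimal sketch of how an output-sensitivity score in the spirit of [2] could rank untrained candidate networks: perturb the inputs with small Gaussian noise and measure the average change in the output. The MLP construction, the perturbation scheme, and all names here are hypothetical assumptions for illustration, not the exact definition used in [2].

```python
import numpy as np

def init_mlp(sizes, rng):
    """Randomly initialise an untrained MLP (He-style init; an assumption)."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

def output_sensitivity(params, X, sigma=0.01, n_perturb=8, rng=None):
    """Mean output change per unit of Gaussian input perturbation."""
    if rng is None:
        rng = np.random.default_rng(0)
    y = forward(params, X)
    deltas = []
    for _ in range(n_perturb):
        Xp = X + sigma * rng.standard_normal(X.shape)
        deltas.append(np.linalg.norm(forward(params, Xp) - y, axis=1).mean())
    return float(np.mean(deltas) / sigma)
```

In a NAS-Bench-201-style study, one would compute such a score for each candidate architecture at initialisation and compare the resulting ranking (and its cost) against trained-accuracy rankings from the benchmark.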

Required skills:

  • Solid knowledge in Machine Learning
  • Strong Python programming skills
  • (preferred) Experience with PyTorch or TensorFlow

To apply please send your CV and grades to Mahsa Forouzesh.

[1] Mellor et al., Neural Architecture Search without Training

[2] Forouzesh et al., Generalization Comparison of Deep Neural Networks via Output Sensitivity