An efficient architecture for an AI model can be found without ever training it.
Designing neural network architectures is typically a slow, expensive cycle of training and pruning. The Random Cloud method instead stochastically explores randomly initialized networks to find an efficient topology before any learning happens. The resulting minimal models match the performance of networks that were painstakingly trained and optimized, which suggests that an AI system's structure matters more than the specific values of its weights. This approach could drastically reduce the cost and energy required to build high-performance AI models, hinting that the future of AI design may lie in random exploration rather than brute-force training.
Random Cloud: Finding Minimal Neural Architectures Without Training
arXiv · 2604.26830
I propose the \emph{Random Cloud} method, a training-free approach to neural architecture search that discovers minimal feedforward network topologies through stochastic exploration and progressive structural reduction. Unlike post-training pruning methods that require a full train-prune-retrain cycle, this method evaluates randomly initialized networks without backpropagation, progressively reduces their topology, and only trains the best minimal candidate at the end. I evaluate the method on 7 classification tasks.
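The search loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the scoring proxy (best forward-pass accuracy over a cloud of random initializations, with no gradients) and the reduction rule (halve the hidden width while the score holds within a tolerance) are assumptions chosen to make the idea concrete, and the toy task is synthetic.

```python
# Hedged sketch of a Random-Cloud-style search. Assumptions (not from the
# paper): the scoring proxy is the best forward-pass accuracy over a cloud
# of random initializations, and reduction halves the hidden width while
# the score stays within a tolerance of the widest net's score.
import numpy as np

rng = np.random.default_rng(0)

# Tiny synthetic 2-class task: label depends on the sign of x0 + x1.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def random_net_score(hidden, trials=20):
    """Best forward-pass accuracy over `trials` randomly initialized
    4-hidden-1 feedforward nets; no backpropagation involved."""
    best = 0.0
    for _ in range(trials):
        W1 = rng.normal(size=(4, hidden))
        w2 = rng.normal(size=hidden)
        logits = np.maximum(X @ W1, 0.0) @ w2        # ReLU hidden layer
        acc = np.mean((logits > 0).astype(int) == y)
        best = max(best, max(acc, 1 - acc))          # readout sign is arbitrary
    return best

# Progressive structural reduction: shrink the topology while the
# untrained score stays close to the widest candidate's score.
hidden, tol = 64, 0.02
baseline = random_net_score(hidden)
while hidden > 1:
    cand = hidden // 2
    if random_net_score(cand) >= baseline - tol:
        hidden = cand    # smaller topology is still as good untrained
    else:
        break
print("minimal hidden width:", hidden)
```

Only the final minimal candidate would then be trained with gradients; everything up to that point touches weights only to sample them.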