There is growing interest in automating the design of good neural network architectures. Recently proposed NAS methods have significantly reduced the architecture search cost by sharing parameters, but designing the search space remains a challenging problem. Existing operation-level architecture search methods require either a large amount of computing power or a very carefully designed search space of operations. In this paper, we investigate whether competitive performance can be achieved with only a small amount of computing power and without careful search-space design. We propose TENAS, which uses Taylor expansion and only a single, fixed type of operation. The resulting architecture is sparse at the channel level and has a different topology in each cell. Experimental results on CIFAR-10 and ImageNet show that the fine-grained, sparse model searched by TENAS achieves performance highly competitive with the dense models searched by existing methods.
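For concreteness, the abstract's reference to Taylor expansion can be read as a first-order importance criterion for channels; the exact criterion used by TENAS is not given here, so the formula below is only an illustrative sketch of that standard idea, in which the loss change from removing a channel activation \(z_c\) is approximated by the magnitude of the gradient-activation product.

\[
  % Assumed first-order Taylor importance of channel c (illustrative only):
  \mathcal{I}(c) \;=\; \bigl|\, \mathcal{L}(z_c) - \mathcal{L}(z_c \to 0) \,\bigr|
  \;\approx\; \Bigl|\, \frac{\partial \mathcal{L}}{\partial z_c}\, z_c \,\Bigr|
\]

Under such a criterion, channels with small scores can be pruned, yielding the channel-sparse, per-cell topologies described above.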