Deep learning models based on the Transformer architecture have revolutionized the state of the art in NLP tasks. Since English is the language in which most significant advances are made, languages such as Spanish require dedicated training, whose computational cost is so high that only large corporations with servers and GPUs can afford it. This work explores how to derive a Spanish-language model from a large multilingual one, specifically a model for text summarization, a very common NLP task. The results, measured by summarization quality (ROUGE score), show that these smaller, language-specific models achieve results comparable to those of much larger models, with reasonable training requirements in terms of time and computational power, and generate summaries significantly faster.
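To make the evaluation metric concrete, the following is a minimal, illustrative sketch of ROUGE-1 (unigram overlap) F1 in pure Python; a real evaluation would use an established implementation such as the `rouge_score` package, and the example sentences are hypothetical:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap ROUGE-1 F1 between a reference and a candidate summary.

    Simplified sketch: whitespace tokenization, no stemming or stopword handling.
    """
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each unigram counts at most as often as in the reference.
    overlap = sum((ref_counts & cand_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical reference/candidate pair: 4 of 5 unigrams overlap, so F1 = 0.8.
print(rouge1_f1("el modelo genera resúmenes breves",
                "el modelo produce resúmenes breves"))
```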