Automated machine learning has been successful in supporting data scientists in selecting appropriate machine learning architectures, as well as in optimizing hyperparameters. By doing so, it allows data scientists to focus their attention on more important tasks. Partly thanks to the TAILOR project, in which Leiden University and JSI have collaborated successfully, we have seen a growing demand for AutoML techniques that deliver solutions which are not only accurate but also trustworthy according to several relevant criteria. In particular, neural networks are known to be vulnerable to adversarial attacks, and robustness against such attacks is an important criterion of trustworthiness. In this talk, I will summarize various projects carried out through this collaboration that envision AutoML solutions specifically addressing the robustness of neural networks.