Adversarial Attacks against AI-Driven Experimental Peptide Design Workflows
Time: Friday, 19 November 2021, 10:50am - 11:10am CST
Description: Artificial intelligence/machine learning (AI/ML) techniques are fueling a revolution in how scientific experiments are designed, implemented, and automated. Specifically, increasingly high-bandwidth instruments coupled to new hardware and software systems can significantly improve the throughput of experimental results, while AI/ML techniques can provide insights into novel science and theories that were hitherto inaccessible. Despite recent progress in such "self-driving labs", these automated platforms are susceptible to traditional cyber-security attacks. Using a motivating example of an automated approach to designing antimicrobial peptides (AMPs), our position paper demonstrates how adversarial attacks may affect the execution of such experimental workflows. We highlight important problems in adversarial robustness that may need to be resolved in order to establish a trustworthy and safe AI-driven AMP synthesis system.