RECENT YEARS have seen a boom in biometric security systems—identification measures based on a person’s unique biology—from unlocking smartphones to automating border controls. As this technology becomes more prevalent, some cybersecurity researchers worry about how secure biometric data really is, and about the risk of spoofing. If generative AI becomes so powerful and easy to use that deepfake audio and video can break into our security systems, what can be done?