
Machine Learning and Artificial Intelligence Security Risk: Categorizing Attacks and Failure Modes

This post was published 2 years ago. Download links are most likely obsolete. If that's the case, try asking the uploader to re-upload.

LinkedIn Learning
Duration: 1h 11m | .MP4 1280x720, 30 fps(r) | AAC, 48000 Hz, 2ch | 713 MB
Genre: eLearning | Language: English

From predicting medical outcomes to managing retirement funds, we place a lot of trust in machine learning (ML) and artificial intelligence (AI) technology, even though we know it is vulnerable to attacks and can sometimes fail us completely. In this course, instructor Diana Kelley pulls real-world examples from the latest ML research and walks through the ways ML and AI can fail, providing pointers on how to design, build, and maintain resilient systems.

Learn about intentional failures caused by attacks and unintentional failures caused by design flaws and implementation issues. Security threats and privacy risks are serious, but with the right tools and preparation, you can set yourself up to reduce them. Diana explains some of the most effective approaches and techniques for building robust and resilient ML, such as dataset hygiene, adversarial training, and access control for APIs.
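To give a flavor of one of the techniques the course names, here is a minimal, hedged sketch of adversarial training. It is not taken from the course itself: it assumes an FGSM-style attack (perturbing inputs in the sign of the loss gradient) applied to a toy logistic-regression model, trained on clean and perturbed examples together. All names and parameters here are illustrative.

```python
import numpy as np

# Illustrative sketch only (not from the course): adversarial training of a
# toy logistic-regression classifier using FGSM-style input perturbations.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: two 2-D Gaussian blobs with labels 0 and 1.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w, b = np.zeros(2), 0.0     # model parameters
lr, eps = 0.1, 0.2          # learning rate, perturbation budget

for _ in range(200):
    # The gradient of the loss w.r.t. the *inputs* gives the attack direction.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w           # d(loss)/dX for logistic loss
    X_adv = X + eps * np.sign(grad_x)       # FGSM-style adversarial examples

    # Train on clean and adversarial examples together.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * X_all.T @ (p_all - y_all) / len(y_all)
    b -= lr * np.mean(p_all - y_all)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)  # accuracy on clean data
```

The idea, as covered in the robustness literature, is that a model repeatedly shown worst-case perturbed inputs learns a decision boundary that is harder to flip with small input changes.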

