<p><span style="background-color: rgba(0, 0, 0, 0); color: rgba(0, 0, 0, 1)">Deepfakes, misinformation, AI slop... these are all examples of AI producing content that can - and does - manipulate, persuade, or trick people. This book looks at AI from another perspective: drawing on insights from offensive security, LLM Red Teaming, and penetration testing, we learn how AI itself is tricked and deceived, gaining practical tools to test modern AI systems.&nbsp;</span></p><p></p><p><span style="background-color: rgba(0, 0, 0, 0); color: rgba(0, 0, 0, 1)">This book walks you through the fundamentals of LLM internals, discusses important ethical and philosophical frameworks to work under, provides taxonomies and templates for testing LLM systems, and outlines automated approaches to LLM Red Teaming along with example environments for testing your skills.&nbsp;</span></p><p><span style="background-color: rgba(0, 0, 0, 0); color: rgba(0, 0, 0, 1)">With its focus on LLM red teaming, this book is written for security practitioners looking to get into LLM Red Teaming, individuals interested in better understanding how LLMs tick, and AI security researchers seeking a pocket guide.&nbsp;</span></p><p></p><p><span style="background-color: rgba(0, 0, 0, 0); color: rgba(0, 0, 0, 1)">It builds on the author's experience working on a PhD at the intersection of social science and machine learning, along with training in AI Red Teaming, machine learning, and the ethics and human rights dimensions of artificial intelligence.&nbsp;</span></p><p></p><p class="ql-align-center"><em style="background-color: rgba(0, 0, 0, 0); color: rgba(0, 0, 0, 1)">AI is a tool - just like any tool, it can be used to build cities or break them down. Choose the former.</em></p>