It was a brisk morning in Bangalore as cybersecurity practitioners gathered at 9 AM to demystify the enigmatic technology called Generative AI (GenAI).
My fellow presenter, Vignesh, traveled from Coimbatore to join me in explaining GenAI's applications in cybersecurity.
The session was great, fueled by the audience's energy, attentiveness, and curiosity. This diverse group, representing various cybersecurity fields, shared a common goal: to understand how GenAI works under the hood. It was truly inspiring to witness their hunger for knowledge and their desire to explore the workshop's concepts more deeply.
We successfully addressed the key question: How does GenAI differ from traditional Deep Learning, and what new capabilities does it possess that have the potential to revolutionize multiple fields?
For your reference, here is a summary of the key learnings.
Limitations of AI: GenAI and AI cannot go beyond the limits of computer science. Most real-world problems are computationally complex (NP-hard, NP-complete, PSPACE) and cannot be solved perfectly by any computational algorithm, including AI and GenAI. Examples include breaking encryption (e.g., RSA) and finding all possible attack paths in hacking.
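As a toy illustration of that combinatorial blowup (a minimal sketch, not how real attack-path tools work), counting the simple paths between two hosts in a fully connected network of n nodes shows the count exploding factorially, which is why exhaustive attack-path enumeration quickly becomes infeasible:

```python
from itertools import permutations

def count_simple_paths(n):
    """Count simple paths between two fixed hosts in a fully
    connected network (complete graph) of n nodes, by trying every
    ordering of every subset of the n-2 intermediate hosts."""
    intermediates = range(n - 2)
    total = 0
    for k in range(n - 1):  # path may pass through 0..n-2 intermediates
        total += sum(1 for _ in permutations(intermediates, k))
    return total

for n in range(3, 9):
    print(n, count_simple_paths(n))
```

Already at 8 hosts there are 1,957 simple paths between a single pair of endpoints; the count grows roughly like (n-2)!, so brute force collapses long before realistic network sizes.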
Turing Test: Developed by Alan Turing, this test measures a machine's ability to exhibit intelligent behavior indistinguishable from a human.
Evolution of AI: The workshop traced the evolution of AI from single-node neural networks (Perceptrons) to rule-based systems, neural networks, deep learning (post-2010), and finally, Large Language Models (LLMs) and GenAI.
Attention Mechanism: The key difference between RNNs (classic deep learning) and LLMs is that LLMs capture the contextual meaning of words, sentences, and paragraphs using a technique called attention (both self- and cross-attention).
Transformer Architecture: LLMs predict the next token (word, pixel, etc.) using an encoder-decoder architecture called a transformer. The encoder "understands" the meaning of the input, while the decoder generates the output (e.g., continuing a summary).
Example: Consider the phrases "I am doing a work out," "I am doing work from home," and "I am doing work since yesterday." The meaning of "work" depends on surrounding words like "out" and "from home."
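The "work" example above is what attention resolves: each token's representation becomes a weighted mix of the other tokens it should attend to. Here is a rough, stdlib-only sketch of scaled dot-product attention over toy two-dimensional vectors (the vectors are made up for illustration; real models use learned, high-dimensional embeddings):

```python
import math

def softmax(xs):
    """Turn raw scores into a probability distribution."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query token:
    score each key against the query, softmax the scores,
    and return the weighted mix of the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Toy vectors: the query is most similar to the 2nd and 3rd keys,
# so their values dominate the output mix.
context = attention(query=[0.0, 1.0],
                    keys=[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                    values=[[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]])
```

The output vector leans toward the values whose keys matched the query, which is exactly how "work" ends up represented differently next to "out" versus "from home."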
Transformer Architecture Breakdown: The architecture involves tokenization, embedding, positional embedding, encoder layers (neural networks), and decoder layers (neural networks).
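The front end of that pipeline (tokenization, embedding, positional embedding) can be sketched with a tiny hand-built vocabulary and embedding table; these values are hypothetical stand-ins for the learned vocabularies and weight matrices a real transformer uses:

```python
import math

def tokenize(text, vocab):
    """Map each word to an integer token id (word-level, toy version)."""
    return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]

def embed(token_ids, table):
    """Look up a vector for each token id."""
    return [table[t] for t in token_ids]

def add_positional(embeddings):
    """Add sinusoidal positional encodings so the model knows word order."""
    d = len(embeddings[0])
    out = []
    for pos, vec in enumerate(embeddings):
        pe = [math.sin(pos / 10000 ** (i / d)) if i % 2 == 0
              else math.cos(pos / 10000 ** ((i - 1) / d))
              for i in range(d)]
        out.append([v + p for v, p in zip(vec, pe)])
    return out

# Hypothetical 5-word vocabulary and 2-dimensional embedding table.
vocab = {"<unk>": 0, "i": 1, "am": 2, "doing": 3, "work": 4}
table = [[0.0, 0.0], [0.1, 0.2], [0.3, 0.1], [0.2, 0.4], [0.5, 0.5]]

ids = tokenize("I am doing work", vocab)
vectors = add_positional(embed(ids, table))
```

The resulting position-aware vectors are what the encoder's attention layers then operate on.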
Introduction to Tools and Resources: The workshop provided an overview of Hugging Face, pre-trained models like GPT-2, Databricks' DBRX, the Kaggle platform, and running GPT models in Kaggle notebooks.
GenAI Model Parameters: Every GenAI model exposes parameters like context length, temperature, and maximum tokens, each of which affects its behavior.
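Temperature in particular is easy to see in isolation: next-token logits are divided by the temperature before the softmax, so low values sharpen the distribution (near-deterministic output) and high values flatten it (more varied output). A minimal sketch with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Divide logits by temperature, then softmax.
    T < 1 sharpens the distribution; T > 1 flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, temperature=0.2)  # near-greedy
hot = softmax_with_temperature(logits, temperature=2.0)   # more random
```

With the same logits, the low-temperature distribution puts almost all probability on the top token, while the high-temperature one spreads probability across all three, which is why temperature is the usual knob for creativity versus determinism.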
Security Applications of GenAI:
Security Operations (SecOps): Automating SOAR (Security Orchestration, Automation, and Response) workflows with GenAI-powered SOC analysts.
Penetration Testing (PenTesting): Leveraging GenAI for payload and exploit generation.
Looking Forward:
We will conduct more follow-up hands-on sessions on the following topics.
GenAI Security: Understand model security, GenAI application security, and data security concerns.
Hands-on Red Teaming of LLMs: Exploring adversarial testing of LLMs to identify vulnerabilities.
Hands-on Penetration Testing of GenAI Applications: Probing GenAI-powered applications for security weaknesses.
Security Controls: Implementing safeguards like guardrails and AI firewalls to mitigate risks associated with GenAI.
Keep learning!!