Infosecurity Europe 2024 has wrapped up, and it was as bustling and productive as ever!
This event remains the premier destination for exploring the latest trends and innovations in the cyber security industry, forging and strengthening relationships with vendors, partners, and, most importantly, our customers.
Key Takeaways
AI Steals the Spotlight
From the conversations we were having and the focus around the exhibition hall, it’s clear that the industry continues to focus heavily on operational security tooling and services through the abundance of MDR/EDR/XDR messaging. This aligns with our own experiences of customer demand and requirements and the growth of our own MDR portfolio. SIEM is still a key consideration, which, along with EDR, XDR, Vulnerability Scanning, and Incident Response, are the core components of an overarching MDR service – All services we deliver ourselves, through our customer-facing Security Operations Centre (SOC).
Over and above that, Artificial Intelligence (AI) was undeniably the star of the show. This is what people are most keen to understand in terms of the security implications and of course, how it can help.
Navigating the AI Terminology Overload
AI and AI-related terms and technologies such as Generative AI, Deepfakes, Predictive AI, Machine Learning, LLMs – AI itself, and AI-related terms and technologies were omnipresent. Like most new development areas in the IT industry, large-scale hype can be overwhelming Here is our concise aide-memoir:
- Artificial Intelligence (AI): Computer systems performing tasks typically requiring human intelligence, such as visual perception, speech recognition and language translation
- Generative AI (GenAI): AI techniques that learn from data to create new, unique artifacts resembling but not repeating the original data
- Large Language Models (LLMs): Specialised AI trained on vast amounts of text to understand and generate content, used in tools like ChatGPT, Google Bard and Bing Chat
- Machine Learning (ML): AI where computers automatically find patterns in data or solve problems without explicit programming
- Predictive AI: AI that forecasts future patterns, predictions, and trends
AI in Business: Cyber Security Considerations
Since the release of ChatGPT in 2022, the use of GenAI and LLMs has become widespread for streamlining business operations, introducing time efficiencies, and to input data sources at scale for purposes of analysis and resultant insight into business operations.
However, like with any technology/non-AI system/application, GenAI and LLMs require appropriate cyber security measures based on organisational risk management processes. Unique vulnerabilities to consider include:
- Accuracy: AI is not foolproof – it can provide incorrect responses presented as fact
- Bias: AI responses may appear biased if given leading questions
- Prompt Injection Attacks: If not suitably secured, attackers can manipulate AI (LLMs in particular) inputs to produce unintended outputs, such as offensive content or confidential information release
- Data Poisoning Attacks: If attackers gain access and tamper with the data used to train the AI, the system may produce undesirable outcomes
To mitigate these risks, AI implementation should follow a secure software development lifecycle (Secure SDLC) to ensure the application code is tested for vulnerabilities from design to deployment. Additionally, we always recommend conducting a full risk assessment and providing policy guidance before allowing users to utilise tools such as ChatGPT. Shadow AI introduces a whole host of new cyber security challenges!
How Can We Help?
We can strengthen your digital defences and enhance your cyber security strategies by understanding and addressing the challenges associated with AI and other emerging technologies.
Stay protected and informed. Speak to your account manager today to discover how we can help you implement cutting-edge cyber security solutions tailored to your business needs.