What image pops into your head when you hear the words ‘cyber attack’?
A picture of someone wearing a hoodie, with their face obscured, hunched over a computer in a basement?
Yet your biggest security risk isn’t some unknown threat actor, but your staff – the insider threat.
By the nature of their employment, staff require access to your sensitive information and the systems that hold it. Though prudent organisations deploy access control, this doesn’t change the fact that they must implicitly trust staff.
Even without employees going rogue, this gives rise to significant risk for organisations:
- According to Verizon’s 2024 Data Breach Investigations Report, 68% of data breaches involve a “non-malicious human element”, like staff clicking a phishing link.
- Zscaler found that phishing attacks surged by 58.2% in 2023.
How can organisations protect themselves? And what is the value of simulated phishing?
Penetration tester Hilmi Tin explains.
In this interview
- Real-life phishing examples
- What is simulated phishing?
- How to recognise a phishing attack
- What does a social engineering penetration test involve?
- Social engineering and AI
- Get a fresh spin on traditional phishing awareness
Real-life phishing examples
Why is phishing such a big security threat?
First, you need to understand the anatomy of an attack.
Many external attacks start with the threat actor distributing emails containing a malicious link or attachment – in other words, a phishing email. If someone then clicks that link [or downloads the attachment], the target system becomes infected.
This gives the attacker access to further systems on the internal network, enabling them to access and exfiltrate confidential data.
Can you share any real-life examples?
Sure. When giving a demonstration, I usually share a few case studies to help people understand the importance of being alert.
For example, an employee at aerospace parts manufacturer FACC got an email that was supposedly from the CEO but was, in fact, a scam. The email asked the employee to transfer €42 million [about £35 million] for an ‘acquisition project’, which they did.
Crelan Bank fell victim to a similar scam not long after, costing it more than $75 million [about £60 million].
There’s also the Uber breach of 2022. A criminal hacker gained access to Uber’s internal systems by obtaining an employee’s phone number and sending them a phishing text with a link to a fake login page.
The employee entered their credentials, which the hacker captured. Then, the hacker used an MFA [multifactor authentication] fatigue technique, sending repeated MFA push notifications to the employee’s phone until they approved the login request.
This gave the hacker access to Uber’s VPN [virtual private network], leading to the now infamous breach.
What is simulated phishing?
What is a simulated phishing programme, and what are its benefits?
A simulated phishing programme is a security exercise where controlled, fake phishing emails are sent to employees to test their ability to recognise and respond to phishing attacks.
This gives organisations insight – based on a real-world simulation – into the effectiveness of their training and overall security programme. It also indicates where extra resources, such as targeted training, may be needed.
The exercise itself may also increase staff awareness, and can be designed to give instant feedback to anyone who engages with the emails. Doing such exercises on a regular basis will also improve your security culture.
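To make that concrete, here is a minimal Python sketch – purely illustrative, with hypothetical users and field names – of how the results of a simulated phishing campaign might be summarised into the kind of metrics described above:

```python
from dataclasses import dataclass


@dataclass
class SimulationResult:
    """Outcome recorded for one recipient of a simulated phishing email."""
    user: str
    opened: bool    # opened the email
    clicked: bool   # clicked the (harmless) tracking link
    reported: bool  # reported the email to IT/security


def summarise(results: list[SimulationResult]) -> dict[str, float]:
    """Return headline campaign metrics as percentages."""
    total = len(results)
    if total == 0:
        return {"open_rate": 0.0, "click_rate": 0.0, "report_rate": 0.0}
    return {
        "open_rate": round(100 * sum(r.opened for r in results) / total, 1),
        "click_rate": round(100 * sum(r.clicked for r in results) / total, 1),
        "report_rate": round(100 * sum(r.reported for r in results) / total, 1),
    }


# Hypothetical campaign data
campaign = [
    SimulationResult("alice", opened=True, clicked=True, reported=False),
    SimulationResult("bob", opened=True, clicked=False, reported=True),
    SimulationResult("carol", opened=False, clicked=False, reported=False),
]
print(summarise(campaign))
# {'open_rate': 66.7, 'click_rate': 33.3, 'report_rate': 33.3}
```

Tracking the report rate alongside the click rate helps show where extra resources, such as targeted training, may be needed.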
Why should organisations consider social engineering penetration testing over cheaper measures like staff awareness training?
One thing I’ve come to realise is that everyone can identify a phishing email.
But people don’t truly understand that attackers are exploiting our psychology, taking advantage of the fact we’re curious, or making clever use of fear factors.
Crucially, the phishing message will create some kind of incentive for you to take quick action. For example, you’re told:
- To click a link to submit documentation supposedly essential to pass your probation; or
- A member of staff has published an ill-informed article, forcing the company to publish a statement, with a link to the statement. That gets a lot of people to click – they immediately think: “What colleague? What article?”
Whatever the specifics, phishing is always designed to bypass your ability to think critically – yet if you stop and think about it for ten seconds, you’ll realise something is off.
Could you share any other examples?
I remember one client, where the IT manager who approved the template fell victim to the attack himself. He actually clicked the link, even though he’d seen the template in advance!
To me, that’s the ultimate proof that people can identify these templates. The trouble is that psychology comes into play in real-world settings.
Threat actors exploit our psychology – that’s why phishing attacks are so effective. In the case of this client, I was shocked to see this person falling for the ‘attack’ – then again, it was an IT-related email that looked rather serious.
How to recognise a phishing attack
How can people recognise a phishing attack? Particularly when, in an AI-powered environment, ‘clues’ like spelling and grammar errors may become a thing of the past?
If you weren’t expecting the message, and it’s making you feel compelled to take urgent action, that’s a warning sign.
If someone asks me: “How do I know if this is malicious?”, I generally respond: “Were you expecting an email like that?”
People then typically say: “Actually, no.” To which I respond: “Well, there you go.”
What about if your job inherently involves dealing with unexpected, urgent messages?
That’s a trickier scenario – something like a sales team, which tends to get targeted more anyway.
We advise people to take ten seconds. That’s all it takes to identify phishing.
When I give presentations or training to those who fell victim to the simulated attack, I show the very templates that tricked them. Interestingly, nearly every user spots the warning signs then.
Someone said: “Look, the image is malformed there!” Someone else asked: “Why does it say ‘billgates@[organisationname].com’?” To which I replied: “Right. And does Bill Gates work for you?” – at which point everyone started laughing.
So, again, ten seconds is all it takes!
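To put those warning signs into something concrete, here is a minimal, illustrative Python sketch – the domain, keywords and example message are all hypothetical, and no script replaces the ten-second human check described above – that flags an unexpected external sender, urgent language and a login link:

```python
# A rough heuristic for spotting phishing warning signs – illustration only
ORG_DOMAIN = "example.com"  # hypothetical organisation domain
URGENCY_CUES = (
    "urgent", "immediately", "within 24 hours",
    "account suspended", "verify your password", "action required",
)


def warning_signs(sender: str, subject: str, body: str) -> list[str]:
    """Return simple warning signs found in a message (not a real filter)."""
    signs = []
    text = f"{subject} {body}".lower()

    # 1. The message comes from outside the organisation
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain != ORG_DOMAIN:
        signs.append(f"sender domain is external: {sender_domain}")

    # 2. Pressure to take urgent action – the classic cue described above
    if any(cue in text for cue in URGENCY_CUES):
        signs.append("message pressures you to take urgent action")

    # 3. A link plus a request to log in or hand over credentials
    if "http" in body.lower() and ("login" in text or "credentials" in text):
        signs.append("asks you to follow a link and log in")

    return signs


# Hypothetical example, loosely modelled on the 'billgates@...' case above
print(warning_signs(
    sender="billgates@examp1e-corp.com",
    subject="Action required: verify your password within 24 hours",
    body="Click https://examp1e-corp.com/login to keep your account active.",
))
```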
Finding this interview useful? To get notified of future Q&As and other free resources like this, subscribe to our free weekly newsletter: the Security Spotlight.
What does a social engineering penetration test involve?
How does a simulated phishing engagement start?
We always begin with a pre-call with the client, in which we gauge:
- Their staff’s current level of security awareness; and
- The types of phishing email they tend to receive, which we may want to replicate for the test.
Once we have a good idea of the level of awareness, we continue to work closely with the client to select a template for the simulated attack. Templates fall into three tiers:
- Highly sophisticated;
- Sophisticated; and
- Unsophisticated.
We also check with the client when most emails are sent, and make sure to send the test during those hours.
How do you choose the correct template?
As a rule of thumb, we opt for highly sophisticated when the organisation has an ongoing training programme.
If they’ve ‘dabbled’ in training, we go for a sophisticated template.
And if they haven’t rolled out training at all, we go for unsophisticated – an email that should only take a second’s glance to make the user go: “That looks dodgy.”
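As a rough sketch of that rule of thumb – the maturity labels and the function itself are hypothetical, for illustration only:

```python
def choose_template_tier(training_maturity: str) -> str:
    """Map an organisation's awareness-training maturity to a template tier.

    Hypothetical maturity labels: 'ongoing', 'dabbled', 'none'.
    """
    tiers = {
        "ongoing": "highly sophisticated",  # an established, ongoing training programme
        "dabbled": "sophisticated",         # some training, but nothing sustained
        "none": "unsophisticated",          # no training rolled out yet
    }
    try:
        return tiers[training_maturity]
    except KeyError:
        raise ValueError(f"unknown maturity level: {training_maturity!r}")


print(choose_template_tier("dabbled"))  # -> 'sophisticated'
```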
Could you share an example of an engagement? How do you build an ‘attack’?
A recent client opted for a ‘middle tier’ phishing email.
From there, we explored:
- Their partners;
- Their upcoming events; and
- Their ‘trending’ emails – the emails they regularly receive.
We realised that staff regularly received Microsoft security emails, for authentication purposes.
So, we replicated that email, since staff received it so often and had a trust relationship with that template. When we launched the ‘attack’, 90% of the organisation failed the test.
This comes back to my earlier point about how anyone can identify phishing in a training setting, but not necessarily in a real-world scenario, when your psychology is being ‘hacked’.
Do you have any more examples?
Sometimes, we do an ‘onion-layered’ attack, where we send two emails.
First, I send out an unsophisticated phishing email – one I expect anyone to be able to detect.
Then, I screenshot that email and send out another one, pretending to be IT, saying something like: “This email has been circulating around the organisation this morning. Here’s a list of users who have clicked the link.”
That email then includes a link to this list of users. Clicking that link is extremely tempting – you’re exploiting both curiosity and fear.
Here’s another example of a highly effective template [not reproduced here].
Again, you’re exploiting certain psychological factors – panic and curiosity.
But there are many other aspects an attacker could exploit, including:
- Fear
- Guilt
- Trust
- Shame
- Threats
- Scarcity
- Urgency
- Authority
- Reciprocity
- Social proof
- Appeal to helpfulness
- FOMO [fear of missing out]
Social engineering and AI
To come back to AI again, as this technology becomes more powerful, can you foresee changes to how social engineering attacks work?
Definitely. Attackers could train an LLM [large language model] on data sets focused on persuasive techniques, psychological triggers and/or manipulation tactics.
This would make the phishing attack even more effective, as the model learns to apply specific tactics that exploit human vulnerabilities.
You could get the AI to, for example, apply a stronger curiosity or fear factor to the message. And it can do so in seconds. That has dramatically increased the scale of these attacks – you no longer need much skill to create a sophisticated, convincing attack.
In fact, when doing live demonstrations, I construct highly sophisticated phishing emails in seconds. With an international audience, I’ll even get ChatGPT to rapidly translate the email – which, depending on the language, can be a flawless translation.
The audience then says things like: “Oh my word. This is exactly what we’d receive from our colleagues.”
That makes looking out for the warning signs – an unexpected message that pressures you into urgent action – all the more important.
Get a fresh spin on traditional phishing awareness
Our unique simulated phishing programme combines interactive training, simulated phishing attacks and a session with an ethical hacker like Hilmi to significantly improve your resilience to phishing attacks.
Following testing, employees will receive personalised feedback on their vulnerabilities, practical advice on phishing detection and a clear understanding of how to protect your organisation better.
We’ve trialled this programme with various organisations, and the results have been incredible. The live training sessions dramatically reduce the risk of phishing attacks, and end users find the interaction and ability to ask questions invaluable.
About Hilmi Tin
Hilmi is an experienced penetration tester with a proven track record of delivering exceptional results for clients across diverse industries. His expertise spans web application and API testing, infrastructure testing, and social engineering penetration tests.
Hilmi is also passionate about knowledge-sharing. He actively mentors and coaches aspiring penetration testers, helping them navigate and excel in the field.
His influence extends to the corporate sector, where he has led impactful training sessions for major organisations.
We hope you enjoyed this edition of our ‘Expert Insight’ series. We’ll be back soon, chatting to another expert within GRC International Group.
If you’d like to get our latest interviews and resources straight to your inbox, subscribe to our free Security Spotlight newsletter.
Alternatively, explore our full index of interviews here.