missing links

Over the past couple of months I’ve been preparing a talk entitled ‘Beer, Bacon and Blue Teaming’. It covers building solid defense on a shoestring budget, with an outline along the lines of:

  • OSINT sources.
  • Spam traps.
  • Honeypots.
  • Automated analysis.
  • Dissecting LuminosityLink:
    • IDS.
    • Sysmon.
    • Configuration extraction.
    • Yara rule creation (see the sketch below this outline).
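
As a taste of the Yara item, here's a minimal sketch of compiling and testing a rule with the yara-python module; the rule body and its string are purely illustrative placeholders, not actual indicators from the LuminosityLink dissection.

```python
# Minimal, illustrative sketch of compiling and testing a Yara rule with
# the yara-python module. The rule and its string are placeholders, not
# real indicators pulled from the LuminosityLink dissection.
import sys
import yara

RULE_SOURCE = r"""
rule Example_LuminosityLink_Placeholder
{
    strings:
        $s1 = "LuminosityLink" ascii wide    // placeholder string only
    condition:
        any of them
}
"""

def scan(path: str) -> bool:
    rules = yara.compile(source=RULE_SOURCE)
    matches = rules.match(filepath=path)
    for m in matches:
        print(f"{path}: matched {m.rule}")
    return bool(matches)

if __name__ == "__main__":
    scan(sys.argv[1])
```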

In this short blog post I’ll briefly run over a few of these items. Continue reading “missing links”

revisiting the watering hole – again

One of the most popular developments of mine, and in my opinion one of the most effective at what it sets out to do, is the Pond Security Awareness Framework. In the last post I made regarding it, I introduced the concept of multiple campaigns and collaboration via SignalR: multiple users could work on the same campaign, saving and resuming work on it whenever they please. My problem, however, was that the attack vector was still limited to email – I wanted more. So, I have introduced an API, meaning that any method of attack can be used, so long as it can execute code that POSTs to a URL.
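
To make that concrete, here is a hypothetical sketch of what a payload – delivered by whatever vector, be it a USB drop, a rogue captive portal or plain old email – could do to report back. The endpoint, campaign ID and field names are made up for illustration; they are not Pond’s actual API.

```python
# Hypothetical sketch of a payload reporting a "hit" back to a Pond-style
# campaign API. The URL, campaign ID, field names and values are all
# illustrative assumptions, not Pond's real interface.
import requests

POND_API = "https://pond.example.internal/api/campaigns/1/hits"

def report_hit(victim_id: str, vector: str) -> None:
    payload = {
        "victim": victim_id,   # who triggered the payload
        "vector": vector,      # e.g. "usb-drop", "wifi-portal", "email"
    }
    response = requests.post(POND_API, json=payload, timeout=5)
    response.raise_for_status()

if __name__ == "__main__":
    report_hit("jbloggs", "usb-drop")
```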

Further to this, the code has now – finally – been made public under the Apache license.
Continue reading “revisiting the watering hole – again”

revisiting the watering hole

Earlier this year, in May, I wrote about a phishing framework that I used to mount a company-wide phishing campaign – for testing purposes, of course. Since then several other companies have used the application for the same purpose, and there has been some real interest in it within the local security community. This interest sparked a revival of the project: a complete rearchitecture, redesign and rethinking of how it should be used. Whilst the code isn’t quite ready for public release and I haven’t started on documentation, in this post I’ll outline the new architecture and demonstrate a campaign from beginning to end. Continue reading “revisiting the watering hole”

lord of the flies

The popularity of a security consultant within a development-oriented organisation is most certainly bipolar. Occasionally, after thwarting a breach or reporting a bug directly to a developer rather than through JIRA (where it would expose their incompetence), we are gifted the opportunity to feel a little more human and receive – for once – some warmth from our compatriots. Most of the time, however, we’re that troll under the bridge pulling at people’s ankles, standing in their way and grunting orders at them as they try to cross. On the other hand, the reality is that the very nature of our job is to protect and help others, and to do so requires a solid understanding of all layers of the stack. So, for the most part we’re not grunting orders whilst having no clue as to what we’re talking about: we’re making well-informed observations that warrant attention.

Many a dev shop I’ve stepped into can be likened to Lord of the Flies, where the developers are so focused on design, functionality and UX that they lose touch with what really makes a product: engineering. Design may sell a product, but without solid engineering it will almost certainly see a short lifespan, significant downtime, no sales via word-of-mouth and/or reputational harm. What I’ve been trying to teach developers is that security not only has the function of protecting data and users, but also promotes robust engineering. Making security a priority throughout the entire design and development process ultimately produces a more reliable product that requires less downtime to patch bugs – allowing developers to focus more on functionality and design during post-go-live sprints. Think of it this way: if you cut corners when constructing the foundations and frame of a house, only to later discover that there is a critical issue with either, you’re going to have one hell of a time trying to address the issue without seriously impacting its occupants. So, the key to forcing a shift toward secure development practices is education: knowing vulnerabilities and their impact, coding securely, testing, and how to efficiently integrate standards into projects. An effective tool to illustrate this and to get developers adopting more of a hacker mindset is the HackMe application. Previously I developed and released vuln_demo; however, I’ve recently ended that project and created FooBl0g. Continue reading “lord of the flies”

burn after reading

On Tuesday evening I delivered a presentation to a fairly diverse group made up of local IT business owners and staff – the largest of its kind in my city. The subject of it was incident response: hiring the right staff and educating existing staff, designing networks that reduce the impact of breaches, log correlation and malware analysis, etc. One point that I made, which visibly provoked deep thought throughout much of the audience, was that shifting infrastructure into the cloud moves our data further out of the reach of our security controls and into the hands of potentially untrustworthy and incompetent third parties. You may say: “well, duh”. Trust me, it came as a surprise to me too that this would cause distress for people, as in my mind it’s absolute common sense – but obviously not. The concern of outsourcing security was, however, one of the reasons that I chose to introduce a policy within my workplace that prohibits the transmission of confidential data (e.g. credentials) via email or SMS, as data retention and the security of cloud and telco services are at times somewhat questionable. So, you want to eliminate the storage of confidential information in any such outsourced services. Faced with having to devise a solution that is usable by even the most technically inept, I decided to build upon a concept already used by some online services: self-destructing, encrypted messages.
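
The core of that concept, as a minimal sketch – not the production implementation, and using the Python cryptography library’s Fernet recipe purely for illustration: encrypt the message, store the ciphertext against a random token, and delete the record on first read so nothing useful remains at rest afterwards.

```python
# Minimal sketch of a self-destructing, encrypted message store.
# Illustrative only: a real service would persist to a database, keep the
# key out of server-side storage (e.g. in the URL fragment) and enforce
# expiry times.
import secrets
from cryptography.fernet import Fernet

_store = {}  # token -> (key, ciphertext); in-memory for the sketch

def create_message(plaintext: str) -> str:
    key = Fernet.generate_key()
    token = secrets.token_urlsafe(16)
    _store[token] = (key, Fernet(key).encrypt(plaintext.encode()))
    return token  # shared once, e.g. embedded in a link

def read_message(token: str) -> str:
    key, ciphertext = _store.pop(token)  # pop: the message can only be read once
    return Fernet(key).decrypt(ciphertext).decode()

if __name__ == "__main__":
    t = create_message("db password: hunter2")
    print(read_message(t))   # first read succeeds
    # read_message(t)        # a second read raises KeyError -- it's gone
```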

direct dealings

A client of mine operates a fairly large trading website that allows users to upload media (e.g. images, videos and documents) to accompany their listings, and respondents to do the same with their responses. The uploaded files are stored on disk, i.e. not in a database. Following some operational re-architecture, it has been decided that the architecture and development of the application will also be tidied up a bit. As a good portion of the application is already in Amazon, it has been suggested that one option is to store the flat files in S3 – cue my input on how this could be achieved. Continue reading “direct dealings”
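
One way it could be achieved, as a hedged sketch with boto3 (the bucket name, key layout and expiry below are assumptions for the example): write uploads straight to S3 instead of local disk, and hand out short-lived presigned URLs so the bucket itself can remain private.

```python
# Rough sketch of moving listing uploads from local disk to S3 using boto3.
# Bucket name, key layout and expiry are illustrative assumptions.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-trading-uploads"  # hypothetical bucket

def store_upload(listing_id: int, filename: str, data: bytes) -> str:
    key = f"listings/{listing_id}/{filename}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=data)
    return key

def temporary_link(key: str, expires: int = 300) -> str:
    # Presigned URL: the bucket stays private, the link expires after 5 minutes.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=expires,
    )
```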

zero trust

Prior to the incalculably game-changing Snowden revelations of 2013, privacy was something most assumed to be guaranteed. Following them, not so much. The credibility of cryptographic algorithms came into dispute, as did the credibility of most major telcos, service providers and equipment manufacturers (Facebook, Google, Microsoft, Cisco, etc.).

A project of mine, now dubbed Akelarre (the meeting place of witches), began several months ago as an experiment with SignalR, a real-time server-to-client communication library, on top of PostgreSQL.


[screenshot: the main page]

I wanted to develop a simple group chat application. As most projects tend to, this quickly spiraled out of control as I came up with more and more ideas: server-side encryption, per-group encryption, secure invites and secure file uploads… but then I thought: what if the admin goes rogue, or the database and key(s) are compromised?

Enter the concept of host-proof applications, where the host cannot be trusted with sensitive data.
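
In practice that means all encryption and decryption happens on the client, keyed by something the server never sees – a group passphrase, say – so the host only ever stores ciphertext. A minimal sketch of the idea (not Akelarre’s actual key derivation or message format), using the Python cryptography library:

```python
# Sketch of the host-proof idea: the client derives a key from a group
# passphrase and only ever hands ciphertext to the server. Illustrative
# only -- not Akelarre's actual key derivation or message format.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt, iterations=200_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

# Client side: encrypt before anything leaves the browser/app.
salt = os.urandom(16)                       # stored alongside the group, not secret
key = derive_key("group passphrase", salt)  # the passphrase never reaches the server
ciphertext = Fernet(key).encrypt(b"meet at the usual place")

# Server side: all it can store (or leak) is the salt and ciphertext.
stored = (salt, ciphertext)

# Another group member with the passphrase can decrypt locally.
print(Fernet(derive_key("group passphrase", stored[0])).decrypt(stored[1]).decode())
```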

Note: This project is no longer maintained. I may revisit it at a later date.
Continue reading “zero trust”