building a higher wall

IISFortify is a suite of scripts I produce to optimise the configuration of Windows Schannel and IIS, bolstering both cryptographic standards and HTTP response headers. As my workplace is beginning to dip its toes into Server 2016 and IIS 10, it's about time the Server 2012 scripts were updated and scripts for Server 2016/Windows 10 introduced.
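To give a flavour of the response-header side of the hardening, here's a rough sketch (in Python, purely for illustration: it isn't code from IISFortify, and the web.config path and header values are examples only) of merging a few common security headers into a site's web.config under system.webServer/httpProtocol/customHeaders:

```python
# Illustrative only: add common security response headers to an IIS site's
# web.config (<system.webServer>/<httpProtocol>/<customHeaders>).
# The path and header values are examples, not IISFortify's actual settings.
import xml.etree.ElementTree as ET

WEB_CONFIG = r"C:\inetpub\wwwroot\web.config"  # example site path

HEADERS = {
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "SAMEORIGIN",
}

def ensure_child(parent, tag):
    """Return the first child element with this tag, creating it if missing."""
    child = parent.find(tag)
    if child is None:
        child = ET.SubElement(parent, tag)
    return child

tree = ET.parse(WEB_CONFIG)
root = tree.getroot()  # <configuration>

custom_headers = ensure_child(
    ensure_child(ensure_child(root, "system.webServer"), "httpProtocol"),
    "customHeaders",
)

for name, value in HEADERS.items():
    # Drop any existing entry first so headers aren't duplicated.
    for existing in custom_headers.findall("add"):
        if existing.get("name", "").lower() == name.lower():
            custom_headers.remove(existing)
    ET.SubElement(custom_headers, "add", {"name": name, "value": value})

tree.write(WEB_CONFIG, encoding="utf-8", xml_declaration=True)
```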
Continue reading “building a higher wall”

lord of the flies

The popularity of a security consultant within a development-oriented organisation is most certainly bipolar. Occasionally, after thwarting a breach or reporting a bug directly to a developer rather than through JIRA (where it would expose their incompetence), we are gifted the opportunity to feel a little more human and receive – for once – some warmth from our fellow compatriots. Most of the time, however, we're that troll under the bridge pulling at people's ankles, standing in their way and grunting orders at them as they try to cross. The reality, though, is that the very nature of our job is to protect and help others, and doing so requires a solid understanding of every layer of the stack. So, for the most part we're not grunting orders with no clue what we're talking about: we're making well-informed observations that warrant attention.

Many a dev shop I've stepped into can be likened to Lord of the Flies: the developers are so focused on design, functionality and UX that they lose touch with what really makes a product: engineering. Design may sell a product, but without solid engineering it will almost certainly see a short lifespan, significant downtime, no word-of-mouth sales and/or reputational harm. What I've been trying to teach developers is that security doesn't just protect data and users; it also promotes robust engineering. Making security a priority throughout the entire design and development process ultimately produces a more reliable product that needs less downtime for bug fixes, allowing developers to focus on functionality and design during post go-live sprints.

Think of it this way: if you cut corners when constructing the foundations and frame of a house, only to later discover that there is a critical issue with either, you're going to have one hell of a time addressing it without seriously impacting the occupants. So, the key to forcing a shift toward secure development practices is education: knowing vulnerabilities and their impact, coding securely, testing, and efficiently integrating standards into projects. An effective way to illustrate all of this, and to get developers adopting more of a hacker mindset, is a HackMe application. Previously I developed and released vuln_demo; I've recently ended that project and created FooBl0g.
Continue reading “lord of the flies”

burn after reading

On Tuesday evening I delivered a presentation to a fairly diverse group of local IT business owners and staff – the largest group of its kind in my city. Its subject was incident response: hiring the right staff and educating existing staff, designing networks that reduce the impact of breaches, log correlation, malware analysis, and so on. One point I made, which visibly provoked deep thought throughout much of the audience, was that shifting infrastructure into the cloud moves our data further out of the reach of our security controls and into the hands of potentially untrustworthy and incompetent third parties. You may say: “well, duh”. Trust me, it surprised me too that this would cause distress, as in my mind it's absolute common sense – but obviously not. That concern about outsourcing security was, however, one of the reasons I introduced a policy within my workplace prohibiting the transmission of confidential data (e.g. credentials) via email or SMS, as the data retention and security practices of cloud and telco services are at times somewhat questionable. In short, you want to eliminate the storage of confidential information in any such outsourced services. Faced with having to devise a solution usable by even the most technically inept, I decided to build upon a concept already used by some online services: self-destructing, encrypted messages.
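To give a sense of the general pattern (an illustration of the concept only, not necessarily the design I settled on), here's a minimal Python sketch using the cryptography library's Fernet recipe: the ciphertext sits server-side under a random identifier and is deleted on first read, while the key only ever travels inside the share link.

```python
# Minimal sketch of the self-destructing, encrypted message pattern using the
# `cryptography` package's Fernet recipe. The in-memory dict stands in for
# whatever persistence a real service would use, and that store only ever
# holds ciphertext; the key travels in the link handed to the recipient.
import secrets
from cryptography.fernet import Fernet

_store = {}  # message_id -> ciphertext

def create_message(plaintext: str) -> str:
    """Encrypt the message, store the ciphertext and return a one-time token."""
    key = Fernet.generate_key()
    message_id = secrets.token_urlsafe(16)
    _store[message_id] = Fernet(key).encrypt(plaintext.encode("utf-8"))
    return f"{message_id}.{key.decode('ascii')}"

def read_message(token: str) -> str:
    """Decrypt the message and delete it permanently on first read."""
    message_id, key = token.split(".", 1)
    ciphertext = _store.pop(message_id)  # popping it is what makes it 'burn'
    return Fernet(key.encode("ascii")).decrypt(ciphertext).decode("utf-8")

if __name__ == "__main__":
    token = create_message("temporary database password: hunter2")
    print(read_message(token))  # works exactly once
    # A second read_message(token) raises KeyError: the message is gone.
```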

out with the old (update)

A very popular post of mine has been ‘out with the old’, which details a series of scripts for hardening web servers. Since I first posted it, a number of changes have been made to the scripts following vulnerabilities like POODLE and the recent ‘Bar Mitzvah’ attack against RC4, so here's a quick update on those changes, along with some challenges I've encountered.
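The scripts themselves aren't reproduced here, but as a rough illustration of the kind of change the RC4 update involves, the sketch below (Python purely for illustration; the key names are the documented Schannel cipher keys) sets the registry values that disable the RC4 cipher suites:

```python
# Illustrative sketch: disable the RC4 cipher suites in Schannel by setting
# Enabled=0 under the documented cipher keys. Run as Administrator; Schannel
# changes generally need a reboot before they take effect.
import winreg

CIPHERS = r"SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers"

for cipher in ("RC4 128/128", "RC4 56/128", "RC4 40/128"):
    # CreateKeyEx opens the key if it already exists, creating it otherwise.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, CIPHERS + "\\" + cipher,
                            0, winreg.KEY_WRITE) as key:
        winreg.SetValueEx(key, "Enabled", 0, winreg.REG_DWORD, 0)
```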
Continue reading “out with the old (update)”

facade

In 2009 I gave a presentation entitled ‘Human 0-Days’ in which I made two very clear points:

  1. The inherent selfishness of humans is perhaps their most gaping, easily exploitable vulnerability.
  2. An organisation's weakest point will always be its employees, due to the above.

To illustrate these two points I performed a demonstration of how a rogue wireless access point could be used to both extract confidential information from associated clients and infect them with malware. Whilst the latter task has remained consistently effective over the past few years, the widespread use of SSL/TLS and HTTP Strict Transport Security has made intercepting readable and usable information somewhat more difficult – at least until now…

Continue reading “facade”

when FDE becomes your enemy

This morning, after a few months of hacking about in it, I hit a rather fatal issue with one of my Linux Mint installs, which uses both full-disk and home-folder encryption. Usually this wouldn't bother me much, as a fresh install is always nice (particularly since Linux Mint 17.1 is now out), but last night I made some changes to a script and hadn't yet committed them to my BitBucket repo…

Here’s how I recovered the file.
Continue reading “when FDE becomes your enemy”

out with the old

Up until the Snowden revelations, SSL/TLS configuration was something that had very little attention paid to it: if it was there, it was assumed to be doing its job – but it turns out that isn't so. Providing clients with the greatest possible level of transport security, and thereby protecting against threats like monitoring and protocol/cipher downgrade attacks, requires a little work. In a recent post, Zero Trust, I alluded to a suite of scripts used to harden Microsoft web server configurations; with these applied, the transport security a Microsoft web server provides is more than ample by industry standards, and the server should score an A on the Qualys SSL Labs test.
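As a quick way of checking that grade after running the scripts, the result can be pulled programmatically; the sketch below assumes the public SSL Labs assessment API (api.ssllabs.com/api/v3/analyze, whose parameters may have changed since writing) and uses an example hostname.

```python
# Poll the public Qualys SSL Labs API until the assessment completes, then
# print the grade awarded to each endpoint. Hostname and API details are
# assumptions for illustration; check the current SSL Labs API docs.
import json
import time
import urllib.request

HOST = "example.com"
API = "https://api.ssllabs.com/api/v3/analyze"

def analyze(host: str) -> dict:
    url = f"{API}?host={host}&fromCache=on&all=done"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

report = analyze(HOST)
while report.get("status") not in ("READY", "ERROR"):
    time.sleep(30)  # a full assessment usually takes a few minutes
    report = analyze(HOST)

for endpoint in report.get("endpoints", []):
    print(endpoint.get("ipAddress"), endpoint.get("grade"))
```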
Continue reading “out with the old”