The past 12 months have seen six New Zealand security professionals embark on a volunteer project focused on finding security issues that concern New Zealand businesses and our address space. This group has become known as “Threat Safari”.

A large percentage of our findings have been phishing sites and dumped credentials, but we’ve also had some significant wins in discovering botnet and malware infrastructure in New Zealand and helping to get it resolved.

This post was originally published in May of this year, and has been updated and republished to accompany my purplecon talk: Roast Criminals, Not Marshmallows.


About Threat Safari

The Threat Safari project evolved from two smaller projects: my existing malware tracking operation and a Slack channel with the same general goal.

Between my existing operation and the general goal of the Slack channel we found a common purpose: helping to find issues in our country and initiating their resolution through CERT NZ and private sector contacts. The group holds a shared belief that:

  • Not enough is being done to help victims of compromise.
  • Victims deserve better than discussion of their compromise on public forums.

Core to the capabilities of the group is Safari Guide, a threat intelligence platform that grew out of my malware tracker. It is capable of discovering a variety of New Zealand specific security issues, including:

  • Blacklisted hosts.
  • Malware C2 servers.
  • Malware hosts.
  • Phishing sites.
  • Vulnerable devices and sites.

This is done by utilising a number of sources (a short normalisation sketch follows this list):

  • ~60 feeds.
  • Intel communities.
  • Sandboxes.
  • Honeypots and spamtraps.
  • Data enrichment (using VirusTotal and ThreatMiner).
  • Shodan and Google Dorking.
  • Other methods/sources.
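To give a rough sense of how these sources feed the platform, here is that minimal normalisation sketch: pull one feed and reduce its entries to a common indicator format before any enrichment happens. The feed URL, field names and Indicator schema are all invented for illustration; they are not Safari Guide’s actual configuration or code.

    import csv
    import io
    import urllib.request
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class Indicator:
        value: str       # the observable itself: a URL, domain, IP or hash
        ioc_type: str    # "url", "domain", "ip", "sha256", ...
        source: str      # which feed it came from
        first_seen: str  # ISO 8601 timestamp of when we first saw it

    # Placeholder feed; real feeds differ wildly in format and need their own parsers.
    FEEDS = {"example-url-feed": "https://feeds.example.net/urls.csv"}

    def fetch_feed(name, url):
        """Download a CSV feed and normalise each row into an Indicator."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            text = resp.read().decode("utf-8", errors="replace")
        now = datetime.now(timezone.utc).isoformat()
        for row in csv.DictReader(io.StringIO(text)):
            yield Indicator(row["url"].strip(), "url", name, now)

    def collect():
        """Pull every configured feed into one de-duplicated set of indicators."""
        seen = {}
        for name, url in FEEDS.items():
            for ind in fetch_feed(name, url):
                seen.setdefault((ind.ioc_type, ind.value), ind)
        return list(seen.values())

The same pattern repeats for each source: parse whatever the feed gives you, reduce it to a small common schema, de-duplicate, and only then spend effort on enrichment.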

The general enrichment process is detailed in an earlier post, “making the most of it”:

The approach I have taken is to stretch a single indicator as far as possible by pivoting between multiple data sources, so that I could – for example – follow a process such as:

C2 checkin or beacon alert in Payload Security feed > extract domains (a blacklist based crawl would start at this point) > use passive DNS service to identify recently utilised IPs > search on VirusTotal for samples that have recently communicated with the IPs > submit C2 data to the tracker > search on VirusTotal for URLs associated with the IPs > fetch samples > Yara scan files > submit sample data to tracker.

… but in the case of Safari Guide there is also assessment of file hashes for communicating samples and sandbox analysis of new samples.
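To make that chain a little more concrete, here is a hypothetical sketch of the same pivots in code. Every helper below (extract_domains, passive_dns_lookup, vt_communicating_files and so on) is a placeholder for your own passive DNS, VirusTotal, sample-fetching and Yara integrations; none of them are real APIs, and this is not Safari Guide’s implementation.

    def extract_domains(alert):
        """Parse domains out of a feed entry (placeholder)."""
        return alert.get("domains", [])

    def passive_dns_lookup(domain):
        """Return IPs recently observed hosting the domain (placeholder)."""
        return []

    def vt_communicating_files(ip):
        """Return hashes of samples seen communicating with the IP (placeholder)."""
        return []

    def vt_urls_for_ip(ip):
        """Return URLs associated with the IP (placeholder)."""
        return []

    def fetch_and_scan(sha256):
        """Download the sample and return its Yara matches (placeholder)."""
        return []

    def submit_to_tracker(kind, data):
        """Record C2, URL or sample data in the tracker (placeholder)."""
        print(kind, data)

    def pivot_from_alert(alert):
        """Stretch one C2 alert as far as it will go by pivoting between sources."""
        for domain in extract_domains(alert):
            submit_to_tracker("c2", {"domain": domain, "source": alert["feed"]})
            for ip in passive_dns_lookup(domain):
                for sha256 in vt_communicating_files(ip):
                    submit_to_tracker("sample", {"sha256": sha256, "yara": fetch_and_scan(sha256)})
                for url in vt_urls_for_ip(ip):
                    submit_to_tracker("url", {"url": url, "related_ip": ip})

    pivot_from_alert({"feed": "payload-security", "domains": ["c2.example.net"]})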

In order for some of the “other” methods of discovery to remain effective, the platform must remain closed source; however, some of the key open feeds and enrichment methods are available in my open source project ph0neutria.

Learnings

Building a (new) threat intelligence platform might be a waste of time…

Right from the inception of ph0neutria I’ve had a dislike of repeating what others are already doing, and have aimed to generate a more unique dataset, even if it is built using mostly OSINT. The likes of MISP and Cymon already do a good job of straight feed aggregation, so I had to do better.

For example, when you discover a source of data, look at what inputs and outputs it has, how you can leverage existing data to create more output, and what other sources you can feed those outputs into. For an idea of how far a single observable can be stretched, have a look at VirusTotal Graph:

[Screenshot: VirusTotal Graph showing the relationships radiating out from a single URL]

You could take that single URL, or dig a bit deeper and find a whole lot more.

Similarly, think carefully about where you place your honeypot sensors. While AWS, Linode and DigitalOcean are easy to deploy to, honeypots placed there without much thought will receive the same internet background noise as everyone else’s, and that data is freely available and already being published by many people:

[Screenshot: telnet honeypot logs showing generic internet background noise]

Deploy them somewhere that has juicier context, like in an internet facing segment of a high value organisation, and you’ll naturally get better quality results. This isn’t to say a honeypot in DigitalOcean couldn’t attract human attackers, you just need to work a little harder to draw them toward it (e.g. configure a common subdomain of an organisation to point to it).

Context also matters a great deal when creating spamtraps. It can take many years and a lot of work for a fresh domain to be pulled into malicious spam campaigns, so beginning with an expired domain that has a lot of history gives you a great head start: the trap inherits associations that only a fully functional organisation can develop.

Infrastructure gets re-used (and shouldn’t)…

Taking down a malicious artifact isn’t enough. Whether it be a VPS dedicated to hosting phishing kits, an unpatched WordPress instance that has had malware uploaded to it via a file-upload vulnerability, or a shared host that has been owned inside out – you’re likely to see it pop up on your radar again if all that is done is removing the offending content.

If it looks like an account is being used for the sole purpose of hosting kits: disable it, sinkhole the domain(s), block the payment method and assess whether you see the account IPs attempting to sign up again. If a site or host is owned: eliminate the root cause, address any fallout and monitor the effectiveness of your mitigations. Unsure of how to go about this? PagerDuty has some excellent documentation to help guide you, and you can supplement this with some playbooks.

In light of this, as a hunter it is very beneficial to employ a concept of ‘monitored hosts’ in your operation. Don’t wait for specific items to pop up in feeds: if you know a host is likely to be a problem, go out there and gather the data yourself. Regularly feed historical data back into your pipeline to determine whether new data concerning those hosts exists – VirusTotal and ThreatMiner are excellent for this.
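A very small version of that idea might look like the sketch below. The lookup and pipeline functions are placeholders for however you query VirusTotal, ThreatMiner or your own data; the only point being illustrated is re-checking known-bad hosts on a schedule instead of waiting for them to reappear in a feed.

    import json
    import time
    from datetime import datetime, timezone

    MONITORED_HOSTS_FILE = "monitored_hosts.json"  # hosts you already know are a problem
    CHECK_INTERVAL = 24 * 60 * 60                  # re-check once a day

    def load_monitored_hosts():
        """Read the list of hosts to keep an eye on, e.g. ["203.0.113.10", "bad.example.net"]."""
        with open(MONITORED_HOSTS_FILE) as f:
            return json.load(f)

    def lookup_new_activity(host):
        """Placeholder: ask VirusTotal / ThreatMiner / passive DNS for anything new on this host."""
        return []

    def feed_into_pipeline(host, findings):
        """Placeholder: push findings back into the normal triage pipeline."""
        for item in findings:
            print(datetime.now(timezone.utc).isoformat(), host, item)

    def monitor_loop():
        while True:
            for host in load_monitored_hosts():
                findings = lookup_new_activity(host)
                if findings:
                    feed_into_pipeline(host, findings)
            time.sleep(CHECK_INTERVAL)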

Shared hosting providers need to up their game…

There is a handful of hosting providers that account for a very high percentage of our findings. Takedown requests typically get ignored, repeat offenders do not appear to get their accounts disabled, vulnerable CMSes do not get patched and the root cause of host compromise never gets addressed. Further to these issues, it’s incredibly rare to see hosting providers employ any sort of proactive assessment of what’s happening in their address space – even following supposed resolution of a compromise. Several NZ web hosts have remained on blacklists for as long as we’ve been monitoring them, which potentially impacts hundreds of businesses hosted under these addresses.

I’m yet to determine whether this is the result of negligence or a lack of understanding of how to deal with these problems, but it’s very clear that many hosting providers need to up their game and be more responsive to what is very well communicated advice. There are any number of free services that they could use to detect issues sitting right under their noses.

The CERT NZ Critical Controls could also help put suppliers in a better position to prevent and respond to issues, and it would be in their best interests to educate their customers on how to do the same. If customers of hosting providers were better educated on operating securely they would also be more likely to question irresponsible hosts.

Phishing kits are not very covert…

The Safari Guide platform is able to reliably and automatically identify phishing activity by assessing the following (a rough scoring sketch follows the list):

  • Similarity scoring of domains to brands of interest.
  • Inclusion of brand strings in the URL.
  • Inclusion of brand strings or domains in the page source.
  • Presence of phishing kit elements (e.g. request for username, password, email and payment card data).
  • Inclusion of the brand logo in the page (where the domain owner isn’t the brand owner).
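To make those checks concrete, here is that rough scoring sketch. The brand list, weights, thresholds and the use of difflib for similarity are all invented for illustration; they are not Safari Guide’s actual rules, and the logo check is omitted because it needs image comparison rather than string matching.

    import difflib
    import re
    from urllib.parse import urlparse

    # Illustrative brand list: brand keyword -> the brand's legitimate domain.
    BRANDS = {"examplebank": "examplebank.co.nz"}

    # Form fields commonly requested by phishing kits.
    CREDENTIAL_FIELDS = re.compile(r'name=["\'](username|password|email|card(number)?)["\']', re.I)

    def domain_similarity(domain, brand):
        """Similarity (0..1) between the left-most domain label and a brand name."""
        label = domain.split(".")[0]
        return difflib.SequenceMatcher(None, label, brand).ratio()

    def score_page(url, page_source):
        """Return a crude phishing score for a URL and its fetched HTML."""
        domain = urlparse(url).netloc.lower()
        page = page_source.lower()
        score = 0
        for brand, legit_domain in BRANDS.items():
            if domain == legit_domain or domain.endswith("." + legit_domain):
                continue                                  # the brand's own site is not phishing
            if domain_similarity(domain, brand) > 0.8:
                score += 3                                # look-alike domain
            if brand in url.lower():
                score += 2                                # brand string in the URL
            if brand in page or legit_domain in page:
                score += 2                                # brand string or domain in the page source
        if CREDENTIAL_FIELDS.search(page_source):
            score += 2                                    # asks for credentials or card data
        return score

    # A look-alike subdomain with a credential form scores highly:
    html = '<form><input name="username"><input name="password"></form> examplebank'
    print(score_page("http://examplebank.secure-login.example.com/", html))

In practice you would tune the weights and thresholds against known kits and known-good pages before trusting the output.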

What really makes a site malicious? People have very often relied on observables like domains as an indicator of maliciousness, but a large proportion (often the bulk) of the malicious artifacts I discover are hosted on compromised sites where the domain is unrelated.

Sharing data is essential…

So long as phishing produces victims it’ll continue to be a method that criminals turn to. Nobody is going to shun reliable income. Reporting phishing through the likes of PhishTank and Google Safe Browsing is an effective method of having content blocked on endpoints while waiting for hosting providers to eliminate the problem at its source: verified entries are ingested by a handful of antivirus engines, who in turn share them with others.

Multi-factor authentication is essential for online survival…

Few kits we’ve found have exhibited the ability (or intention) to harvest MFA tokens. In my experience serving as both the red and blue team for MSPs I can fairly confidently say it’s difficult to predict who will fall victim to phishing. I’ve seen a user with 40+ years of experience with computers lose their credentials to a page that was flagged by someone straight out of college. Similarly, that same person has also entered credentials into a page even after browser protections warned them not to. The commonality between all incidents was that MFA prevented the stolen credentials from being abused (along with a company-funded password manager that discouraged password reuse, which I guess is also a valid point to make here).

If you want results you need to seek them…

We’ve found distributing phishing URLs through channels like mailing lists to be an inefficient way to achieve a resolution. There will of course be exceptions, but if you want an actual response to take place then you need to direct the data at the people who are able to execute that response. Most major organisations will have a contact to report phishing to, and they’ll in turn have contacts within hosting providers, browser and security vendors to quickly mitigate and resolve a matter. Where an organisation doesn’t have such a contact, the relevant CERT will have procedures in place and the authority to deal with the matter as their legislation permits – and this also isolates you from the backlash that some companies tend to throw at security researchers.

Use protection…

Believe it or not, criminals aren’t very nice people. If you do not wish to get harassed, doxxed or threatened by actors then you need to operate under well constructed and maintained aliases. These aliases must be well compartmentalised from your real identity, and you must realise:

There is no room for cross-contamination between aliases.

Rule: If you cannot comfortably burn an entire alias and all associated accounts and devices then it’s not suitable for use. Start from scratch and try harder.

At a minimum I’d expect unique (per alias):

  • Character: Unique profile (name, appearance, background etc).
  • Hardware and Network: Whonix workstation.
  • Email: An email address per alias. Disposable addresses for one-off use.
  • Payment: Dedicated disposable VISA (e.g. Prezzy Card).
  • Phone: “Dumb” burner phone per alias.
  • Address: Fake it.

Furthermore, keep in mind:

  • Your targets may be watching you. Do not openly disclose alias details or tradecraft (in particular, methods of detection that, if known, would compromise your ability to gather data).
  • Samples may have IDs tagged to them that indicate who the recipient was. Obfuscate any such details prior to sharing.
  • Data uploaded to services like VirusTotal can be viewed by anyone with the right associations or enough money. Triage samples before uploading. You don’t want to be the initiator of a data breach for your employer.

For more information on this subject I’d recommend the following resources:

For specific tooling check out Privacy Tools.

Resources

Below are some links for tools or services included in my talk:

Communities:

My resources:

Honeypot tools:

Data sources:

Analysis tools:

OPSEC supporting tools:

General resources:
