Breaking

Back to the roots

Finding exploitable vulnerabilities is getting harder. This statement by Dennis Fisher, published on Kaspersky’s Threatpost blog, summarizes a trend in the software development lifecycle. The vulnerabilities published lately that gained public attention all had one thing in common: they were quite hard to exploit. The so-called jailbreakme vulnerability, for instance, was based on several different vulnerabilities that had to be chained together to break out of the iPhone sandbox, escalate privileges and run arbitrary code.

Modern software, and especially modern operating systems, are more secure: they contain fewer flaws and ship with more protection features, which makes reliable exploitation a hard problem that only very skilled hackers can solve. Decades ago this was the case as well, until intelligent tools and the sharing of the needed knowledge enabled even low-skilled people to develop working exploits and attack vulnerable systems. Nowadays we are going back to the roots, where only a few very knowledgeable people are able to circumvent modern security controls. That doesn’t mean all problems are gone: attackers are moving to design flaws like the DLL hijacking problem, so the class of attacks is shifting from old-school memory corruption vulnerabilities to logical flaws that can still be exploited easily.

Still, the number of exploitable vulnerabilities is decreasing, which might be a sign that we are on the right track towards reliable and secure systems, and that software companies are adopting Microsoft’s Security Development Lifecycle (SDL) to produce more secure software. As stated in my previous blog post, the protection features are available but not used very often. Where they are used, and where developers strictly follow the recommendations of the SDL, the trend towards “harder to exploit” vulnerabilities shows that doing so can be a success story.
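As an aside, the DLL hijacking class mentioned above is a pure logic flaw: when an application loads a library by bare name, Windows walks a fixed list of directories and loads the first match, so an attacker who can plant a DLL in a directory searched earlier (classically the current working directory of a document opened from a network share) wins. Here is a minimal sketch of that search-order logic; the order shown is simplified and only for illustration, and the paths are made up:

```python
import os

# Simplified Windows DLL search order when a DLL is requested by bare
# name; real systems vary (SafeDllSearchMode, KnownDLLs, manifests, ...).
def resolve_dll(dll_name, app_dir, cwd):
    search_order = [
        app_dir,                                   # application directory
        os.environ.get("SystemRoot", r"C:\Windows") + r"\System32",
        os.environ.get("SystemRoot", r"C:\Windows"),
        cwd,                                       # current working directory!
    ] + os.environ.get("PATH", "").split(os.pathsep)
    for directory in search_order:
        candidate = os.path.join(directory, dll_name)
        if os.path.isfile(candidate):
            return candidate                       # first match wins
    return None

# If the legitimate DLL only lives later in the list, an attacker who can
# write to an earlier directory (e.g. the CWD of a document opened from a
# network share) gets their copy loaded instead.
print(resolve_dll("foo.dll", r"C:\Program Files\App", r"\\evil\share"))
```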

Have a bug-free day 😉
Michael

Misc

Intel’s Known Good Approach — Chances for a Paradigm Shift?

During the keynote of the Intel Developer Forum, Intel’s CEO Paul Otellini explained their motivation for the acquisition of McAfee. Basically, Intel wants to provide a way to shift computer security from a “known bad” model to a “known good” model.

Coming back to some of our recent blog posts, we think that a reliable and working approach to implement application whitelisting would increase security in corporate environments, especially when thinking of the latest vulnerabilities with exploit code in the wild that could not be caught by any AV solution. As covered by this article, the chances that such an approach succeeds depend heavily on the critical mass that would use it. The widespread x86 architecture therefore is the perfect platform for accomplishing a widely used known good model. Continue reading “Intel’s Known Good Approach — Chances for a Paradigm Shift?”

Breaking

MS10-063, Prevention

One of the four vulnerabilities rated “critical” in yesterday’s MS patchday, namely MS10-063, has an interesting “Workarounds” section regarding MS Internet Explorer. There it states:

“Disabling the support for the parsing of embedded fonts in Internet Explorer prevents this application from being used as an attack vector.”

which, according to the advisory, should/can be done by setting the “Font Downloading” parameter to “Disable”.

Which is exactly what this document suggests. So taking a preventive approach, once more, might have saved some concerns (“Will we be targeted by this one?”) and patch/testing time…
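For environments without central Group Policy management, the same setting can also be flipped per user in the registry. A hedged sketch, assuming the usual IE zone layout where the Internet zone is zone 3 and the “Font download” URL action is stored under value 1604 (0 = enable, 1 = prompt, 3 = disable); in managed environments the corresponding Group Policy setting is clearly preferable:

```python
import winreg

# IE zone settings live per user under this key; zone 3 is the Internet
# zone, and value "1604" holds the "Font download" URL action.
ZONE_KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\3"

def disable_font_download():
    # Assumes the zone key already exists, as it does on a standard
    # Windows install with Internet Explorer present.
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, ZONE_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "1604", 0, winreg.REG_DWORD, 3)  # 3 = disable

if __name__ == "__main__":
    disable_font_download()
```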

Have a great day,

Enno

Misc

That “new worm”…

Recently I noticed this news item titled “New email worm on the move”. At roughly the same time I received an email from a senior security responsible at a large customer asking for mitigation advice, as they got “hit pretty hard” (by this exact piece of malware).
Given that I’m mainly an infrastructure and architecture guy, I’m usually not too involved in malware protection (apart from my continuous ranting that, from an architectural point of view, endpoint-based antivirus has a poor ratio of security benefit to capex/opex). So I’m by no means an expert in this field. Still, I keep scratching my head when I read the associated announcements (like this, this or this) from major “antivirus”, “malware protection” or “endpoint security” vendors. To save typing, in the remainder of the post I’ll call them SNAKE vendors (where “SNAKE” stands for “Smart Nimble APT Kombat Execution”… or something equally ingenious of the valued reader’s choice… 😉).

The following (not too) heretical questions come to mind:

a) What’s the corporate need to allow downloading .scr files at all? Maybe I’m missing something here, or I’m just not creative enough, but I (still) don’t get it. Why not simply block .scr at the network boundaries?
[Yes, I know, there’s no such thing as “well-defined network boundaries” any more, but here we’re talking about HTTP-based downloads, which happen to pass through a few centralized points in quite a number of environments.]

a1) So, maybe blocking downloads of .scr files (as this document recommends, funnily enough together with the recommendation to “filter the URL” on gateways… which really seems an operationally feasible thing for complex environments… and a very effective one for future malware, too ;-)) might be a viable mitigation path.
In my naïve world, the approach of just allowing a certain (“positive”) set of file/MIME types for download would be even better, wouldn’t it? (See the sketch below.)
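To illustrate that “positive set” idea: a gateway policy passes a small allowlist of file types and blocks everything else by default, instead of maintaining an ever-growing blocklist. A minimal, hypothetical sketch (the allowed types are made up for illustration, and a real deployment would enforce this on the proxy itself, inspecting actual content rather than trusting the URL extension):

```python
import mimetypes

# Hypothetical "known good" download policy: everything not explicitly
# allowed is blocked by default (the opposite of a .scr blocklist).
ALLOWED_MIME_TYPES = {
    "text/html",
    "text/plain",
    "image/png",
    "image/jpeg",
    "application/pdf",
}

def download_permitted(url: str) -> bool:
    # Guess the type from the URL; a real gateway would check the actual
    # content (magic bytes / declared Content-Type) instead.
    mime_type, _ = mimetypes.guess_type(url)
    return mime_type in ALLOWED_MIME_TYPES

# A .scr screensaver binary never makes it onto the allowlist:
print(download_permitted("http://example.com/report.pdf"))   # True
print(download_permitted("http://example.com/invoice.scr"))  # False
```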

This reminds me of a consulting project we did for a mid-sized bank (20K users) some years ago. They brought us in to evaluate options to improve their “malware protection stance”, and we finally recommended a set of policy and gateway configuration adjustments (instead of buying a third commercial antimalware product, which they had initially planned). Part of our recommendations was to restrict the file types accepted as email attachments. For a certain file type (from the MS Office family and known as a common malware spread vector at the time) they strongly resisted, stating: “We need to allow this, our customers regularly send us documents of this type”. We then suggested monitoring the use of the various file types in question for some time, and it turned out that for this specific type they had received three (in numbers: 3) legitimate emails within a six-month period…

b) In their mentioned announcements, all major vendors boast of having “updated signatures providing total protection” against this piece of malware.
Hmm… again, very naïvely, I might ask: so why did our customer get “hit pretty hard” (and, following the press, other organizations as well)? They are not a small shop (actually they’re one of the 50 largest corporations in the world), there are a lot of smart people working in the infosec space over there, and – of course! – they run one of the main “best of breed” antimalware solutions on their desktops.
So why did they get hit? I leave the answer to the reader… just a hint: operational aspects might play a role, as always.

This brings me directly to the next question:

c) Trend Micro writes in their blog:

“Upon further investigation, we found that the malware used for this attack was just an unpacked version of a file that we already detected as WORM_AUTORUN.NAD. It is possible that the cybercriminals behind this attack got hold of the code for WORM_AUTORUN.NAD and modified it for their usage.”

Indeed, looking at this entry from August 19 in Microsoft’s malware encyclopedia, there are remarkable similarities.

So, dear SNAKE vendors: do I get it correctly that (most of) you need a new signature when there’s an unpacked version of some malicious piece of code, as opposed to a packed version (of the same code)?
Seems quite a difficult exercise for all those super-smart heuristic adaptive engines … in 2010…
Sorry, guys, how crazy is this? And it seems the stuff was initially observed back in July.
[Did you notice that they don’t even feel embarrassed by admitting this, but proudly display it as a result of their research, which of course takes place in the best interest of their valued customers?]
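To make the point concrete: byte-level signatures match a specific representation of the code, so even a trivial “packing” transformation yields a file that a naive signature no longer matches, although the behavior after unpacking is identical. A toy illustration, with a simple XOR transform standing in for a real runtime packer (this reflects no particular vendor’s engine):

```python
import hashlib

payload = b"... pretend this is the worm's actual code ..."

def xor_pack(data: bytes, key: int = 0x42) -> bytes:
    # Stand-in for a real runtime packer: same code, different bytes.
    return bytes(b ^ key for b in data)

def naive_signature(data: bytes) -> str:
    # A byte-level signature, modeled here as a plain hash of the content.
    return hashlib.sha256(data).hexdigest()

packed = xor_pack(payload)

print(naive_signature(payload))        # signature of the unpacked sample
print(naive_signature(packed))         # completely different signature
print(xor_pack(packed) == payload)     # yet trivially reversible: True
```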

For completeness’ sake it should be mentioned that this piece of malware (no, I won’t rant on the fact that – still, in 2010 – it seems impossible to have a common naming scheme amongst vendors) performs, amongst other things, the following actions on an infected machine:
– turning off security services.
– modifying some security-relevant registry keys.
– sharing system folders.

On most Windows systems all those actions can only be performed by users… with administrative privileges…
Overall, this “classic piece of worm” might remind us that effective desktop protection should maybe be achieved by

– controlling/restricting which types of code and data to bring into a given environment.
– or, at least, _where_ to get executable (types of) code/data from.
– which executables to run on a corporate machine at all (yes, I’m talking about application whitelisting here ;-).
– reflecting on the need of administrative privileges.

and _not_ by still spending even more money for SNAKE oil.

I renew my plea from this post:
So, please please please, just take a small amount (e.g. 1%) of the yearly budget you spend on antimalware software/support/operational cost, get a student intern in and have her start testing application whitelisting on some typical corporate desktops (a minimal sketch of the core idea below). This might contribute to a bit more sustainable security in your environment, one day in the future.
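For that intern’s first afternoon, the core mechanism is simple enough to prototype: hash the executables that are supposed to run and refuse everything else. A minimal sketch of the decision logic only, assuming a hypothetical allowlist.txt of SHA-256 hashes collected from a known-good reference machine (real products enforce this in a kernel driver at process creation, which this sketch does not attempt):

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    # Hash the file in chunks so large executables don't exhaust memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def load_allowlist(path: str) -> set:
    # One SHA-256 hash per line, built from a known-good reference machine.
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def may_execute(exe_path: str, allowlist: set) -> bool:
    # Default deny: anything not on the known-good list is refused.
    return sha256_of(exe_path) in allowlist

if __name__ == "__main__":
    allowlist = load_allowlist("allowlist.txt")
    exe = sys.argv[1]
    print("ALLOW" if may_execute(exe, allowlist) else "DENY", exe)
```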

Have a great day,

Enno
