Posted by Enno Rey
I’m currently involved in creating an up-to-date approach to handling the external connections (read: temporary/permanent connections with external parties like business partners) of a very large enterprise. Currently they have something along the lines of: “there are two types of external connections, trusted and untrusted. The untrusted ones have to be connected by means of a double-staged firewall”.
Which – of course – doesn’t work at all in a VUCA world, for a number of reasons: the demarcation between trusted and untrusted is quite unclear (just think of mergers & acquisitions); business doesn’t like implementing two-staged firewalls in some part of the world where they just signed the memorandum for a joint venture to build windmills in the desert; firewalls might not be the appropriate control for quite some threats anyway (see, for example, slide 46 of this presentation); and so on. Not to mention that I personally think the “double-staged firewall” approach is based on an outdated threat model, in particular when implemented with two different vendors, for the simple reason that the added operational effort is usually not worth the added security benefit (see this post for some discussion of the concept of “operational feasibility”).
Back to the initial point: the approach to be developed is meant to work on the basis of several types of remote connections, each of which determines associated security controls and other parameters. Which, at first glance, does not seem overly complicated, but – as always – the devil is in the details.
What to base those categories on: the trust or security level of the other party (called “$OTHER_ORG” in the following) – or just assume they’re all untrusted? The protection needs of the data accessed by $OTHER_ORG? The (network) type of connection or number & type of users (unauthenticated vs. authenticated, many vs. few), the technical characteristics of the services involved (is an outbound Bloomberg link to be handled differently than an inbound connection to some published application, by means of a Citrix Access Gateway? if so, in what way?) etc.
As a start we put together a comprehensive list of questions regarding the business partner, the characteristics of the connection, the data accessed and other aspects. These have to be answered by the (“business side”) requestor of an external connection. To give you an idea of the nature of the questions, here are the first of those (~ 40 overall) questions:
- Please provide details regarding the company type and ownership of $OTHER_ORG.
- More specifically: does $COMPANY hold shares of $OTHER_ORG?
- Who currently manages the IT infrastructure of $OTHER_ORG?
- Does $OTHER_ORG hold security-relevant (e.g. ISO 27001) certifications, or are they willing to provide SAS 70/ISAE 3402/SSAE 16 (“Type 2”) reports?
- What is – from your perspective – $OTHER_ORG’s maturity level regarding information security management, processes and overall posture?
- How long will the connection be needed?
- Which $COMPANY resources does $OTHER_ORG need to access?
- Does a risk assessment for the mentioned ($COMPANY) resources exist?
- What is the highest (data) classification level that $OTHER_ORG needs access to?
- What is the highest (data) classification of data stored on systems that $OTHER_ORG accesses by some means (even if this data is not part of the planned access)?
- Will data be accessed/processed that is covered by regulatory frameworks (e.g. data protection, PCI, SOX)?
- What would – from your perspective – be the impact for $COMPANY in case the data in question was disclosed to unauthorized 3rd parties?
- What would – from your perspective – be the impact for $COMPANY in case the data in question was irreversibly destroyed?
- What would – from your perspective – be the impact for $COMPANY in case the service(s) in question was/were rendered unavailable for a certain time?
We then defined an initial set of “types of connections” that have different characteristics and might be handled with different measures (security controls being a subset of these). These connection types/categories included:
- “trusted business partners”/TBP (think of outsourcing partners, with strong mutual contractual controls in place etc.).
- “external business partner”/EBP (this is the kind-of default, “traditional” case of an external connection).
- “mergers & acquisitions [heritage]”/MA (including all those scenarios deriving from M & A, like “we legally own them but don’t really know the security posture of their IT landscape” or “somebody else now legally owns them, but they still need heavy access to our central systems, for the next 24-36 months”).
- “business applications”/BusApp (think of Bloomberg access in finance or chemical databases in certain industry sectors).
- “external associates”/ExtAss (“those three developers from that other organization we collaborate with on developing a new portal for some service, who need access to the project’s subversion system which happens to sit in our network”).
Next we tried to assign a category by analyzing the answers in a “point-based” manner (roughly: “in case we own them 100%, give a point for TBP”; “in case the connection is just outbound to a limited set of systems, give a point to BusApp”; “if it’s an inbound connection from fewer than 10 users, here’s a point for ExtAss” etc.), using an MS Excel sheet containing the questions together with drop-down response fields (plus comments where needed) and some calculation logic embedded in the sheet. This seemed a feasible approach, but reflecting on the actual points and assignment system, we realized that, at the end of the day, all these scenarios can be broken down to three relevant parameters which in turn determine the handling options. These parameters are
- the trustworthiness of some entity (e.g. an organization, a network [segment], some users). Please note that _their trustworthiness_ is the basis for _our trust_, so both terms express sides of the same coin.
- the (threat) exposure of systems and data contained in certain parts of some (own|external) network.
- the protection needs of systems and data contained in certain parts of (usually the “own”/$COMPANY’s) network.
Interestingly enough every complex discussion about isolating/segmenting or – the other side of the story – connecting/consolidating (aka “virtualizing”) systems and networks can be reduced to those three fundamentals, see for example this presentation (and I might discuss, in another post, a datacenter project we’re currently involved in where this “reduction” turned out to be useful as well).
Taking this route greatly facilitates the assignment of both individual connections to a category and sets of potential (generic) controls to the connection type categories, as each answer (to one of those questions) directly influences one of those three parameters (e.g. “we hold more than 50% of their shares” => increase trust; “$OTHER_ORG needs to access some of our systems with high privileges” => increase exposure; “data included that is subject to breach laws” => increase protection need etc.).
Which in turn allows a (potentially weighted) points-based approach to identify those connections with many vs. few (trust|exposure|protection need) contributing factors.
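As a purely illustrative sketch of what such a reduction could look like – all answer keys, weights and thresholds below are invented for the example, not the actual calculation logic – the mapping of questionnaire answers to the three parameters might be coded like this:

```python
# Illustrative sketch only: reduce questionnaire answers to the three
# parameters (trust, exposure, protection need). All answer keys and
# weights are invented for this example, not the actual logic.
def score_connection(answers: dict) -> dict:
    scores = {"trust": 0, "exposure": 0, "protection_need": 0}
    if answers.get("ownership_share", 0) > 50:
        scores["trust"] += 2              # "we hold more than 50% of their shares"
    if answers.get("iso27001_certified", False):
        scores["trust"] += 1
    if answers.get("high_privilege_access", False):
        scores["exposure"] += 2           # high-privilege access to our systems
    if answers.get("inbound_connection", False):
        scores["exposure"] += 1
    if answers.get("breach_law_data", False):
        scores["protection_need"] += 2    # data subject to breach notification laws
    if answers.get("max_data_classification", 0) >= 3:
        scores["protection_need"] += 1
    return scores
```

Connections with many contributing factors in one dimension then stand out simply by their score in that dimension.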
More on this including details on the actual calculation approach and the final assignment of a category in the next part of this series which is to be published soon…
Have a great weekend
I’m currently involved in a “Remote Access Security Assessment” and you might be wondering what exactly this means. Well, so did we. At least to some degree (btw: last year we provided some notes on types of security assessments here).
It happens quite often that we’re brought into an organization to perform “a security assessment” of “some item” (“our network”, “that new procurement portal”, “the PKI” etc.). It also happens that the customer does not have a very clear idea of the way such an assessment should be carried out (telling us “you are the experts, you should know what to do”). Or the five people from the customer’s side present in the kick-off meeting have five different concepts (ok, four, as one of them only wants “to get that damned assessment done so we can finally go live”) and we end up moderating their arguments on what should be tested, how this should be done, when this is going to happen, which type of report format is needed (obviously, there are different ones, depending on the goal/scope/methodology of the assessment…) etc.
In such “vague cases” we usually define a set of (~ 10) categories/[security] criteria to be fulfilled by the item in question, based on different sources like the organization’s internal policy framework, potentially the to-be-assessed project’s security goals, relevant standards and “common security best practices” (often the latter only to a small degree, as we have a mostly risk-based security approach which does not necessarily fit very well with [assumption-based] “best practices”; more on this philosophical question in yet-another-post ;-)).
For example, in the case of a “Firewall & DMZ assessment” these categories/criteria could include stuff like “Dedicated LAN switches” (from $ORG’s internal guidelines), “Logging of denied packets” (same source), “Secure management” (from ISO 18028-3), “Scalability” (based on business needs as for $ITEM) or “Proper vulnerability mgmt” (ISO 27001).
We then perform a combination of assessment methods to obtain the necessary information to evaluate those categories in terms of “fulfilled”, “major/minor non-compliance”, “leads to relevant business risk”, “documentation missing” and the like.
Figuring out the individual categories/criteria suited to the customer, their needs and the assessment’s (time, budget, political) constraints is usually intellectually challenging and thus “hard work”. Still, this approach (hopefully ;-)) ensures we can “answer the customer’s relevant question[s] in a structured way and with reasonable effort”.
So why do I tell you all this (besides the implied shameless self-plug)? I faced this very situation in the mentioned “Remote Access Security Assessment” and made some observations that I think are worth sharing.
Let’s have a look at the potential sources for the approach laid out above:
a) internal policies: if I told you that “no formal document written, say, in the last five years, covering the topic and/or available to the people engaging us” could be identified, you wouldn’t be surprised, would you? 😉
[ok, I didn’t give you any details as for the customer… but does it really matter? how many organizations do you know which have an up-to-date “remote access policy”… I mean, one that actually provides reasonable guidance besides “all remote access must be secured appropriately” …]
b) standards & “best practices”
To discuss these I’d first like to briefly introduce some level of abstraction – regular readers of this blog might remember my constant endeavor to “bring structure into things” – by breaking down “a remote access solution” into the following generic entities involved:
- gateway(s)
- network transmission incl. protocols & algorithms
- endpoints
Traditionally there have been 3-4 relevant standard documents in the remote access space, namely ISO 18028-4 Information technology — Security techniques — IT network security — Part 4: Securing remote access (latest version from 2005), NIST SP 800-77 Guide to IPsec VPNs (2005), NIST SP 800-113 Guide to SSL VPNs (2008) and potentially revision 1 of NIST SP 800-46 Guide to Enterprise Telework and Remote Access Security (Rev. 1 published in June 2009).
[Of course, feel free to notify me by PM or comment on this post if you’re aware of others].
Those documents are mainly focused on the first two of the above “abstract elements”, namely gateways and network transmission, which results in lengthy discussions of gateway architectures or protocol specifics and ciphers. It becomes clear that back then the main threats were considered to be “attacks against gateways” or “attacks against network traffic in transit” (whereas today the main risk is probably “unauthorized physical access to the endpoint”, after loss or theft of a device/smartphone).
But times have changed, and in 2011 I tend to assume that these two elements (gateways, network transmission) are sufficiently well handled & secured in the vast majority of organizations. Ok, just as I write this I somehow hesitate, given I recently worked in an environment where the deployed gateways (based on the – from my perspective – market-leading SSL gateway solution) were configured to support stuff like this:
AES256-SHA – 256 Bits
DES-CBC3-SHA – 168 Bits
AES128-SHA – 128 Bits
RC4-SHA – 128 Bits
RC4-MD5 – 128 Bits
DES-CBC-SHA – 56 Bits
EXP-DES-CBC-SHA – 40 Bits
EXP-RC2-CBC-MD5 – 40 Bits
EXP-RC4-MD5 – 40 Bits
Not that I regard this as a relevant risk, it’s just interesting to note. I mean, it’s 2011…
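As a side note: such a cipher list can be sanity-checked automatically. Here’s a minimal sketch; the weakness markers (export-grade, single DES, RC4, MD5) are my rough heuristics for this example, not a formal policy:

```python
# Flag obviously weak entries in an offered cipher suite list.
# The markers (export-grade, single DES, RC4, MD5) are rough
# heuristics for this example, not a formal policy.
WEAK_MARKERS = ("EXP-", "DES-CBC-SHA", "RC4", "MD5")

def weak_ciphers(cipher_names):
    """Return all cipher names matching one of the weakness markers."""
    return [c for c in cipher_names if any(m in c for m in WEAK_MARKERS)]

# The suites offered by the gateway mentioned above
offered = [
    "AES256-SHA", "DES-CBC3-SHA", "AES128-SHA", "RC4-SHA", "RC4-MD5",
    "DES-CBC-SHA", "EXP-DES-CBC-SHA", "EXP-RC2-CBC-MD5", "EXP-RC4-MD5",
]
```

Note that the single-DES marker “DES-CBC-SHA” deliberately does not match the triple-DES suite “DES-CBC3-SHA”.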
Back on track: so, while gateways and protocols might no longer constitute areas of “deep concern”, nowadays “the endpoint” certainly does. When the mentioned standards were written, endpoints were supposed to be mostly company-managed Windows systems. In the meantime most organizations face an “unmanaged mess”, composed of a growing number of smartphones and tablets of all flavors, some – to some degree – company managed, quite a few predominantly “free floating”.
And unfortunately there is – to the best of my knowledge – no current (industry) standard laying out how to handle these from a security perspective. Which in turn means that once an “authoritative framework” is needed (be it for policy reasons, be it for assessments using the approach I described above), there’s nothing “to rely on”; this has to be figured out by the individual organization.
To get a clearer picture I went out and contacted some of my ISO colleagues in a “how do you handle this?” manner. In the following I’ll lay out the results – how mature organizations address the topic – together with some recommendations from our side, based on our usual approach of balancing security and business needs.
At the end of the day the task can be broken down to the question: “which types of endpoints should be allowed which type of access to which (classification level of) data?”.
To answer this in a comprehensive and consistent way, a number of factors have to be taken into account:
- different types of endpoints.
- different types of “access methods”.
- classification of data to be processed.
- potential prerequisites for certain use cases (type of authentication, employee signing an acceptable use policy [AUP], additional [technical] security controls on endpoints and the like).
Classifying types of endpoints
I personally think that the “traditional distinction” between (just) company-managed and non-company-managed – which is used for example in ISO 18028-4 and NIST SP 800-113 – no longer works, and that it makes sense to differentiate roughly between three types of endpoints. To characterize those I quote from a policy I wrote some time ago:
“Company managed device
Any device owned and managed by $COMPANY, e.g. corporate laptops.
Private trustworthy device
Any device that is owned by an individual employee, which is used solely by the named individual and which is kept in an appropriate state as for security controls (up-to-date patch level, anti-malware protection and/or firewall software if feasible etc.). It might be (partly) managed by company-driven controls (e.g. configuration policies).
If used for local processing of restricted data appropriate encryption technologies protecting said data must be in place.
In no instance shall any competitors, business partners, and/or other business-relevant parties be allowed to physically access such a device, even if such persons are otherwise related to the employee or are personal friends or acquaintances of the employee. Furthermore information classified ‘strictly confidential’ shall never be processed on this type of device.
Untrusted device
All other devices, for example systems in a cyber café or devices in an employee’s household which are shared with family members or friends.”
Please note that both NIST SP 800-46r1 (where the party responsible for the security of the device – organization, teleworker or third party – is taken into account) and Gartner (classifying into “platform”, “application” and “concierge” types of services) use a similar three-fold classification approach.
Access methods
By “access method” I mean the way the actual data processing is performed. Let’s classify as follows:
- “full network access” by means of a tunneled IP connection, provided by a network stack like piece of code. This is where the traditional full-blown IPsec or SSL VPN clients come into play.
- “application based” access. This includes all HTTP(S) based portals (with MS Outlook Web Access [OWA] being the most prominent example) and the portals provided by typical SSL VPN gateways that enable users to run certain applications (MS Outlook, SAP GUI, File Browsing and the like).
- “restricted application based”: same as above but usually without file exchange between remote network and local system, e.g. OWA without attachments, or remote desktop access without use of local drives, no copy+paste in TS client etc.
- “(just) presentation logic”: here no actual data processing on the endpoint takes place. The best-known example is probably Citrix-based stuff, in different flavors (ICA client, Citrix Receiver, xd-agent et al.).
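For later reference, these four access methods can be captured by the two properties that actually drive the endpoint-related risk: whether data is processed locally and whether files can be exchanged with the endpoint. The method names below are my shorthand, not taken from any standard:

```python
# My shorthand encoding of the four access methods above; the two
# properties (local data processing, file exchange with the endpoint)
# are what drives the endpoint-related risk.
ACCESS_METHODS = {
    "full_network":           {"local_processing": True,  "file_exchange": True},
    "application":            {"local_processing": True,  "file_exchange": True},
    "restricted_application": {"local_processing": True,  "file_exchange": False},
    "presentation_only":      {"local_processing": False, "file_exchange": False},
}

def methods_allowing(**required):
    """Return the access methods matching the given property values."""
    return [name for name, props in ACCESS_METHODS.items()
            if all(props[k] == v for k, v in required.items())]
```

So, for instance, if no data may be processed on the device at all, only the presentation-logic method remains.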
Classification of data to-be-processed
Another aspect to take into account is the classification of the data that is allowed to be processed on a “remote device”. While this may seem a no-brainer at first glance, in reality it is a tricky one: most organizations do not have a (“widely used in daily practice”) data classification scheme anyway, and usually files (or other information entities) carry no attributed meta-information that would allow the easy identification of their classification level, together with a subsequent enforcement of technical controls (which, btw, is one of the main reasons why DLP is so hard to implement correctly – but that could be discussed in more detail in another post 😉).
And, of course, for an employee who has access anyway, it’s usually technically easy to process higher classified data “than allowed”. The most common approach addressing this is having the employees sign AUPs prohibiting such handling, combined with some liability in case of breaches of such data (this approach, again, might be a minefield in itself, depending on the part of the world and associated legal system you’re [located] in ;-)).
For the following discussion let’s assume that organizations use a scale of four classifications/ classes designated SC0 to SC3 (where SC = “security class” or “security classification”) with 3 being the highest (e.g. “strictly confidential” or “top secret”) and 0 being the lowest (e.g. “public”, “unclassified”).
Prerequisites for certain use cases
Frequently, additional organizational or technical prerequisites for certain use cases (e.g. types of devices, potentially in combination with certain access methods) are imposed. In this space can be found:
- Acceptable Use Policies (AUPs) prescribing what can or should be done, which lay out “prohibited practices” (that might still be technically possible, see above), the handling of backups, the separation of business and private use etc.
- the level of authentication required for certain usage scenarios.
- additional security controls on the endpoint (anti-malware, local firewall, local disk/file/container encryption and the like), with their presence on the endpoint potentially verified by some “host integrity checking” technology, e.g. “UAC” in Juniper space. In this context it should be noted that in most environments UAC or similar approaches from other vendors do not provide an actual security benefit (“enforce that only a trustworthy and secure endpoint can connect”) but simply differentiate between “company managed” and “unmanaged” in the way “look for a certain registry key and if present assume that’s a company managed device” (“no idea if the associated piece of security software is really present, let alone correctly configured, not to mention that it provides any actual security benefit”…).
Putting it all together
Now, putting all this together I came up with the following table. It displays what we feel “is common best security practice” nowadays for handling remote access (endpoint) security, balancing business and security needs:
| Type of device | Classification of data (allowed) to be processed | Allowed access methods | Prerequisites |
|---|---|---|---|
| company managed | usually all; some organizations do not allow the highest on mobile devices at all | all | |
| private trustworthy (which means, as per the definition, having appropriate capabilities/controls) | depends; some organizations do not allow the highest in this scenario. We recommend: no SC3 here. | usually all, but “application based” preferred over full network connect | MFA, appropriate capabilities, sometimes AUP |
| all (others), including “private” devices without appropriate capabilities (e.g. apps on iPads that do not use the “data protection” feature) | SC3 not allowed at all | in case of SC2: no data processing on device, only presentation logic; SC0/SC1: “restricted application based” recommended | MFA, employee-signed AUP |
| all others when gateway-deployed sandbox technologies (e.g. Juniper Secure Virtual Storage for Win32 systems) are used | depends, usually something in the range SC0–2, pretty much never SC3. We recommend: no SC3. | “application based” | MFA, employee-signed AUP |
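The table can be read as a simple lookup. Here’s a hypothetical encoding – the device type and method labels are mine, and I’ve simplified the “depends” cells to our recommendation of “no SC3”:

```python
ALL_METHODS = ["full_network", "application", "restricted_application", "presentation_only"]

def allowed_methods(device_type: str, data_class: int):
    """Hypothetical encoding of the table above: which access methods are
    allowed for a given device type and data classification (SC0-SC3)?
    Labels are mine; the 'depends' cells are simplified to 'no SC3'."""
    if device_type == "company_managed":
        return ALL_METHODS
    if device_type == "private_trustworthy":
        # our recommendation: no SC3 on private devices;
        # "application based" preferred over full network connect
        return [] if data_class >= 3 else ["application", "full_network"]
    # everything else (untrusted devices): SC3 never, SC2 only via
    # presentation logic, "restricted application based" for SC0/SC1
    if data_class >= 3:
        return []          # a risk acceptance would be needed instead
    if data_class == 2:
        return ["presentation_only"]
    return ["presentation_only", "restricted_application"]
```

For example, an untrusted device requesting SC2 data ends up with presentation logic only, while the same device with SC0/SC1 data may also use “restricted application based” access.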
Please note: evidently, “everything else” (other combinations of devices, access methods or data classification levels to be processed) can be implemented as well (and this actually happens in environments I regularly work in). I mean, “infosec follows, supports and enables business”. It’s just that – from our perspective – a risk acceptance is needed then 😉
So, this is what I finally took as the “baseline to evaluate the actual implementation against” (for the “endpoint” category) in the course of that assessment mentioned in the beginning of the post. Hopefully laying out the sources of input, the classifications and the rationale leading to certain restrictions turns out to be helpful for some of you, dear readers.