End to End Security

Only as strong as the weakest link.

Security, by its design, is only as strong as the weakest link. For example, before allowing an acquired company to connect to your network, the acquisition must have its security posture brought up to your standards.

Security needs to be ingrained in every part of the infrastructure, from servers and containers to the way applications are written. Things like MFA and network segmentation are just a start.

Security is more than hiding your company behind a firewall. It’s a framework of controls and policies built on “trust, but verify.” Having a policy with no controls is not ideal, but having controls with no policy is equally bad. I have experience both writing controls and policy and building systems to enforce those policies using next-generation hardware and software, including EDR platforms, packet brokers / capture devices and analytics platforms, SIEM and log management, web filters and DLP appliances.

That’s not to say a firewall is not a valid solution; it is – but it takes more than a firewall to protect your business.

Security Experience Overview

Typically I’ve been at the forefront of two parts of security: the higher-level design side, and the policy and controls decision-making and implementation phases.

  • Firewall / NGFW Design, Rules and Implementation (Cisco, Palo Alto, SonicWall, Fortinet, Watchguard)
  • Zero Trust / Least Privilege
  • Web Filtering (Palo Alto, BlueCoat, ZScaler)
  • Various Frameworks – ISO, NIST, CIS Controls
  • Aware of country- and region-specific regulations (GDPR)
  • Vendor Relations – staying up to date with vendors and their offerings
  • High-Risk Activity Restriction (Email, Social Media posting, IRC / Web Chat, etc.)
  • IDS / IPS Systems (Dell SecureWorks, Palo Alto, Watchguard)
  • SASE Platforms and Architecture
  • Exposure to various Compliance Controls (SOX, HIPAA, PCI, SOC 1 / 2, SSAE 18)
  • Policy Based Routing (PBR) for specific subnets
  • EDR Policy Design and Rollout (Microsoft Defender for Endpoint / ATP, Carbon Black, McAfee, Watchguard EPDR)
  • Patching Standards, Platforms and Schedules (Red Hat Satellite, Puppet, WSUS, SCCM, PDQ)
  • PKI Management (SSL Certificates – Entrust, Keyfactor, GoDaddy, Microsoft Certificate Authority, EJBCA, DigiCert)
  • Remote Access Technologies, along with Policies and Controls (VDI, VPN)
  • Data Encryption – At Rest / In Flight
  • Encryption Technologies and Algorithms (TLS 1.2 / TLS 1.3, HTTP/2 / HTTP/3, AES, RSA, etc.)
  • Network Segmentation and Micro-Segmentation (Routing, Firewall as a Gateway)
  • Packet Capture and Analysis (one offs and platforms like Extrahop)
  • Application level security (HTTPS, Secure APIs, Proxying, F5)
  • Server, Endpoint and Mobile hardening (Typically using CIS Benchmarks)
  • Compliance (SOX, SOC, NIST-800-171, NIST-800-53 / CSF)
  • PII, MNPII, Personal Information handling, storage and controls
  • DLP – Data Loss Prevention Policies and Systems (Zscaler)
  • CVE / CVSS Monitoring, Impact Analysis and Remediation
  • MACsec / IPsec
  • Identity Management and Multifactor Authentication (MFA – Okta, Duo, Red Hat Identity Manager, Cisco ISE)
  • Logging and SIEM (Splunk primarily, but have used other log aggregators and analytics platforms)

Real World Experience – Example

At one point in a previous role, the company I worked for needed a web filtering solution – not so much to restrict users’ access to the web, but to stop egress network traffic destined for any kind of “bad” destination, even if that traffic was internally sourced (meaning a user was attempting to reach the location on the internet). To accomplish this, we first communicated the requirement to business leadership and held several meetings about how we would tackle it. I tend to take a platform-agnostic approach during the “conception” stage; at that point the vendor doesn’t really matter, the requirements do – we then investigate the market and choose a vendor that can meet those requirements – so to start with, we did a lot of requirement gathering. Once the decision was made to move forward, we needed to figure out what we were going to limit access to: not only malicious URLs and IPs, but also what was deemed “risky” traffic – things like Torrents, TOR exit nodes and C2 sites – as well as categories that have no place in our workplace, including illegal drugs and illegal markets, pornography, known phishing sites and known compromised web URLs.
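
To give a feel for that category scoping exercise, here is a minimal sketch of what the egress policy boiled down to – a mapping from URL category to an action. The categories and the default-allow posture shown here are illustrative assumptions, not the company’s actual rule set, and a real appliance uses a vendor-maintained category feed rather than a hand-built dictionary.

```python
# Hypothetical sketch: map URL categories to an egress action.
# Categories here are examples only; real appliances ship vendor feeds.

EGRESS_POLICY = {
    # Malicious / risky traffic: block outright.
    "malware":             "block",
    "command-and-control": "block",
    "phishing":            "block",
    "tor-exit-node":       "block",
    "torrents":            "block",
    # Categories with no place in the workplace.
    "illegal-drugs":       "block",
    "illegal-markets":     "block",
    "pornography":         "block",
}


def egress_action(category: str) -> str:
    """Default-allow policy keyed on the category the filter assigns."""
    return EGRESS_POLICY.get(category, "allow")


print(egress_action("phishing"))  # block
print(egress_action("news"))      # allow
```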

This later evolved into other controls and policies – for example, not allowing the sending of personal email from within the company’s network. Communication to the whole company was needed, starting with upper-level management, as this was a large cultural shift for the company at the time; it was going from fairly unrestricted internet access to fairly controlled internet access.

Web filtering works by intercepting the HTTP request, interpreting it, making the request on behalf of the client, inspecting the request and the result, and then returning the result to the client. With HTTPS, however, most of the request is encrypted very early in the session – for example, the headers and the full URL the user is going to. To get around this, we also had to implement SSL interception on these appliances to gain further detail about the sites being visited. This also allowed us to control certain HTTP methods to certain URLs (for example, blocking an HTTP “POST” to something like gmail.com).
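
As a minimal sketch (plain Python, not any vendor’s policy engine), the method-level control that SSL interception makes possible looks roughly like this; the hosts listed are hypothetical examples, not the production exception list.

```python
# Sketch of a method-level control that only works once SSL interception
# exposes the decrypted method and host. Hosts below are examples only.

# Browsing (GET) stays allowed, but submitting data (POST) is blocked so
# users cannot send personal email from the corporate network.
POST_BLOCKED_HOSTS = {"gmail.com", "mail.example.com"}


def allow_request(method: str, host: str) -> bool:
    """Return True if the decrypted request should be forwarded."""
    if method.upper() == "POST" and host.lower() in POST_BLOCKED_HOSTS:
        return False
    return True


assert allow_request("GET", "gmail.com") is True    # reading mail pages is fine
assert allow_request("POST", "gmail.com") is False  # sending is blocked
```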

Ultimately, we ended up blocking malicious URLs and IP addresses, which greatly improved the security posture of the company – and it paid off fairly quickly from an incident perspective; security incidents dropped steeply.

To implement this, there was first a lot of data gathering – we needed to see what would be blocked if we simply dropped the appliance in and started blocking traffic, and whether or not that would negatively affect the business. To do this, we ran the appliances in a sort of “observe” mode: we only watched the traffic that went by and did not actively block any of it. From there, we logged all of this traffic and analyzed it against the rules we were planning to implement. Some of this traffic was immediately acceptable to block with no impact, but other traffic that had a legitimate business purpose would also have been blocked – in those cases, we needed to assess why it was required and potentially create an exception.
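
A simplified version of that observe-mode analysis is sketched below: replay the logged traffic against the planned rules and report what would have been blocked, so each hit can either confirm the block or justify a documented exception. The log format, categories and hosts are assumptions for illustration, not the actual appliance output.

```python
from collections import Counter

# Hypothetical observe-mode log entries: (user, host, category).
# In practice these came from the appliance's own traffic logs.
OBSERVED = [
    ("alice", "updates.vendor.example",  "software-updates"),
    ("bob",   "tracker.torrent.example", "torrents"),
    ("carol", "partner-ftp.example",     "uncategorized"),
    ("dave",  "login-update.example",    "phishing"),
]

# The rules we planned to enforce once observe mode ended.
PLANNED_BLOCKS = {"torrents", "phishing", "command-and-control"}


def would_block_report(entries):
    """Count, per host, the traffic the planned rules would have blocked."""
    hits = Counter()
    for _user, host, category in entries:
        if category in PLANNED_BLOCKS:
            hits[host] += 1
    return hits


for host, count in would_block_report(OBSERVED).items():
    # Each hit is either confirmed as a block or turned into an exception
    # after assessing the business need behind it.
    print(f"{host}: {count} request(s) would be blocked")
```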

My role in this was to gather requirements, assess what these appliances could do to meet those requirements, design the solution for several different types of architectures (campus and datacenter environments and networks; each used slightly different kit), build the policies and control language, and ultimately assist the engineering team in implementing it – from the QA / UAT portion all the way to applying the solution in the various production environments. Additionally, part of my role was to assist and train the operations team on how to properly maintain these appliances and apply new rules or exceptions as they came up.