PerezBox

Tony Perez On Security, Business, And Life


Thoughts On Security

Security is in a constantly evolving state, and keeping up with the various changes can be hard. As my journey in the security industry continues, I will share thoughts and opinions based on personal experiences. These thoughts are mostly around website security, sprinkled with general security concepts as well, but all tailored to the everyday end user.

Three Things That DNS Outages Teach Administrators

Published in Security on May 5, 2021

Rarely do you wake up thinking to yourself, “I wonder how my DNS is doing today?” but I can guarantee it’s been the root cause of one or two sleepless nights: “gah, DNS again… grrrrr.”

There is no better example than today’s outage with Register.com and Network Solutions, in which customers were told to expect outages ranging from 24 to 48 hours. Imagine, for a moment, going to your executive team and explaining: “Sorry, our website, our services, and everything in between will be down for a day or so.”

In the world of system administration, that’s not a fathomable concept.

What can we do to build our own resiliency against issues like this? We all know these things happen; we don’t have the luxury of pretending they don’t exist, and it’s often only a matter of time. So let’s take a look at three things we can do as administrators to improve that resiliency.

Building Resiliency with our Networks

1 – Functional Isolation of Services. For years I spoke about this when working with organizations to identify and remediate security incidents, and managing infrastructure is no different. It’s still a critical pillar of the security triad – Confidentiality, Integrity, and Availability.

With domains, separating the services would mean dividing the responsibility of two key components – Registrars and Authoritative DNS. No, they are not the same thing.

Most administrators don’t realize that the two don’t have to be married together. While a registrar might make it difficult, or work to persuade you to leverage their Authoritative DNS, that is not a requirement. Separating the responsibility between both functions removes a dependency and improves resiliency.

2 – Failover Authoritative DNS. Any registrar worth their salt will allow you, as the domain owner, to introduce a third-party Auth-DNS service to help in instances like this. A failover Auth-DNS allows you to gracefully respond in the event of an outage.

This outage demonstrates the impacts of not having something like this in place.

In this specific instance, users are stuck until the entire platform is back online. With a failover Auth-DNS service in place, you could reroute traffic accordingly and mitigate the outage until the main service is back online.

You do this by adding a backup nameserver with your existing registrar; it will replicate your records on a third-party Auth-DNS that sits idle until it’s needed. You never think you need it until you need it, and by then it’s already too late to implement.
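If you want to sanity-check that the backup is actually replicating, a quick way is to compare the SOA serial each nameserver reports for your zone. Here is a minimal sketch using Python and the dnspython library; the domain and nameserver IPs below are placeholders (RFC 5737 documentation addresses), not real infrastructure.

import dns.resolver

DOMAIN = "example.com"  # placeholder: your domain
# Placeholder IPs for the primary and backup Auth-DNS providers
NAMESERVERS = {"primary": "192.0.2.53", "backup": "198.51.100.53"}

for role, ip in NAMESERVERS.items():
    resolver = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
    resolver.nameservers = [ip]
    for rdata in resolver.resolve(DOMAIN, "SOA"):
        # Matching serials across both servers means the zone is replicating
        print(f"{role}: SOA serial {rdata.serial}")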

3 – Automated Failover Detection and Response. The next piece of the puzzle to consider is availability as a whole. Outages occur across the entire stack, and very few administrators realize the real power of Authoritative DNS. Addressing the DNS outage specifically is only the first step; now use this opportunity to think through the availability of your domain’s endpoints (e.g., servers).

Work with an Auth-DNS service provider that leverages the latest in DNS technology and empowers you to automatically detect and respond to endpoint outages. This technology should identify failures in the stack and programmatically adjust your network to ensure optimal availability without your intervention.
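To make the idea concrete, here is a minimal sketch of what detect-and-respond looks like. The health-check URL, failover IP, and the update_dns_record() function are all placeholders – the real update call depends entirely on your Auth-DNS provider’s API, and nothing here is any specific provider’s implementation.

import urllib.error
import urllib.request

PRIMARY_HEALTH_URL = "https://www.example.com/health"  # placeholder endpoint
FAILOVER_IP = "198.51.100.10"                          # placeholder standby server

def endpoint_is_healthy(url, timeout=5):
    """Return True if the endpoint answers with a successful HTTP status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def update_dns_record(ip):
    """Placeholder for your Auth-DNS provider's API call to repoint the A record."""
    print(f"Would update the A record to {ip}")

if not endpoint_is_healthy(PRIMARY_HEALTH_URL):
    update_dns_record(FAILOVER_IP)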

NOC.org, an Authoritative DNS Platform

This is a selfless plug, but everything described above is what we built NOC.org to solve. We built a platform to complement your existing stack, not replace it.

As long as your registrar allows custom nameservers, you have the option to leverage NOC.org for DNS outage resiliency. You also have the option to use our Automated Failover Detection and Recovery feature to move your assets back when the outage is recovered.

NOC Introduces a CDN. Yes, a CDN.

Published in Security on May 4, 2021

NOC.org was built to be a networking platform that aggregates the various tools Daniel and I personally need, and use, to manage our own digital portfolios. It was built by network and system administrators for network and system administrators, so yes, we fully expect a more technical audience to appreciate its simplicity and application.

The NOC Foundation

In 2020 we released the platform with two key services we use daily – Authoritative DNS and Asset Monitoring. The authoritative DNS allows us to leverage smart routing options in the zone files to more proactively manage our traffic, while the asset monitoring gives us continuous visibility into the health of our global network of servers and web properties (it’s the same technology we use to manage our CleanBrowsing network, with over 60 points of presence around the world serving billions of requests a day).

What made it especially special wasn’t that it told you whether something was up or down, but that we also introduced intelligence into the monitoring. Via the Auth-DNS smart routing, the monitoring platform can proactively identify, and self-heal from, degraded performance on the network (Automated Failover Detection and Recovery).

This last feature is critical to how we build global services with minimal overhead. Automation is central to this type of thinking, and it has the added benefit of reducing errors.

Tapping into the exponential power of Authoritative DNS also opened an entire world of possibilities for traffic management (concepts like geo-based routing and proximity routing). These introduced new levels of network control that we felt were more relevant to 99% of the web than what is boasted about by existing CDN providers.

Introducing the NOC CDN

The very cool aspect of the Auth-DNS we built is the ability for any organization to build their own CDN. A network admin could easily leverage the smart routing options we’ve introduced into the zone files and create their own global CDN experience independent of third-parties.

To help demonstrate, we built it ourselves and introduced it as the next logical service on the platform.

Not only did it solve our own problems, but we felt it tackled the same problem we’ve been trying to solve for the past decade, using a more modern, elegant solution.

Starting over allowed us to look at the problem holistically. Things you can expect with our CDN:

Site Type Selection – When you first configure the DNS you’ll be asked to make a basic selection (e.g., WordPress, Drupal, etc.). This selection will help drive optimizations designed for your type of platform, and will serve to power future features. :)

CDN Type – By default, most CDNs are built on a single anycast network. For some, the free service is built on a watered-down version of their whole network, while paid customers take full advantage of the entire network. In either case, the architecture is singular in nature.

Our deployment allows you to choose the type of CDN your platform needs. Do you need to optimize for US traffic? European traffic? Asian traffic?

Default Security Settings – We don’t wait for you to make the right, secure setting choices. We make them for you: all traffic is HTTPS by default, and all insecure requests are upgraded by default.

We don’t touch cookie settings, as those can cause all kinds of problems, but we provide an easy way to enable them at the network level and to disable things like FLoC by default.

We also realize that one of the primary reasons CMS-based sites get hacked is improper credential management, leading to credential stuffing (brute force) attacks. So, by design, we offer Protected URLs as a way to choose which URLs should be shielded from the public (the beginning of the security hardening you can expect in the future).

Performance & Caching – We support HTTP/2 and TLS 1.3 by default, along with the foundational caching types (respecting headers and full caching at the edge).

You will also have the option to choose which dynamically populated pages should not be cached.

We focus on the key features that most web properties need. It’s the same technology we use for our own global assets, and we figured it might be interesting to others. When you’re thinking of where this CDN fits, think AWS more than Cloudflare.

All the power, at a fraction of the cost – a core theme in everything we build. Give it a spin and let us know what you think. What do you think is missing? What could we improve?

Feelings Have No Place in the World of Security

Published in Security on December 29, 2020

The quickest, and arguably most effective, way to compromise an organization is via social engineering. Social engineering in the digital sphere is almost always synonymous with some form of phishing attack.

A phishing attack, in its simplest form, is the basic act of luring a victim to some bait to achieve some outcome.

Think of the act of fishing. You attach a worm to your hook, and you cast it into the water. You wait, and eventually a hungry fish comes along and says, “hey, lookie here, dinner..”. They bite, and like magic, you have caught your fish.



Unleashing the Power of Authoritative DNS

Published in Security on August 10, 2020

It was an exceptionally long week, and you managed to get to bed around midnight. You’re a system admin, and at the core of your job is to keep the systems running. Tonight you are on call, if something goes wrong you’re the first to know and are responsible for responding.

You are specifically responsible for ensuring the availability of your company’s website. Unlike other sites, your company runs an online commerce store and services a global audience. The company is yielding $10k in new sales an hour.

It’s imperative that customers can access your site.

It’s 2 am. Your phone starts to light up like a Christmas tree. PagerDuty is having a meltdown and you’re on the receiving end. Your Slack notifications are blowing past their thresholds. Text messages are pouring in.

Little does the chaos know, you forgot to turn on your notifications. There you lie, peacefully, thinking the world is anything but what it is.

The flickering lights and vibration finally get the attention of your dog, who starts to growl at the inconvenience. The break in the evening noise catches your attention. You open your weary eyes and see your phone dancing in the midst of the evening grog.

It hits you. You grab your phone, it takes a fraction of a second to realize what has happened – you’re down.

Introducing NOC.org

For the better part of 10 years, that’s the world Daniel and I lived in, and continue to live in, with our projects. We serviced hundreds of thousands of businesses of all sizes around the world with incident detection, compromise mitigation services, and availability assurances through our CDN / WAF. But through that entire experience, outages happened… it’s the harsh reality of working on networks.

What we realized is that we needed a better solution for detecting, mitigating and recovering from these availability incidents. That’s why we are introducing NOC.org.

With NOC.org, the scenario above would have been identified and mitigated seamlessly for the user via some of the platform’s smart routing features.

Automating Incident Detection, Issue Mitigation, and Seamless Recovery

One of the weakest aspects of monitoring for availability incidents is that it almost always requires manual intervention. Not because the technologies don’t exist, but because users often lack the knowledge and expertise to implement the appropriate mitigating controls. In many more instances, it’s because the platforms themselves make it too complicated.

NOC.org works to modernize the approach by integrating technologies together. Similar tools, but integrated to help make better decisions for users. If there is one thing we have learned over the years, it’s that the world isn’t lacking tools; it’s lacking the ability to parse through the noise and make decisions.

Using Authoritative DNS and the NOC.org smart routing features, a user is able to create enhanced records. These records allow you to create a failover and recovery construct between two nodes that works for you in any incident.

How NOC.org Would Respond to an Availability Incident

In the following illustrations I’ll show you what would have happened in the scenario above:

1 – Normal traffic flow to your web server….

Simplified illustration of Web Traffic hitting a Web Server

2 – NOC.org detects issue with Primary, redirects traffic to Failover within minutes:

NOC.org Detects Issues, Reroutes all traffic

3 – NOC.org detects recovery, and recovers:

NOC.org Automatically Recovers When Outage Mitigated

To do this NOC.org merges different technologies to a) detect issues, and b) automatically respond and recover on behalf of the organization. All through the use of Authoritative DNS and smart routing features.

Binding Monitors with Authoritative DNS Services

One way to tackle availability incidents is to leverage the Domain Name System (DNS), specifically Authoritative DNS (quick primer on DNS).

Authoritative DNS is a critical part of how the web works. It contains all the information associated with a domain, known as records. These records are stored in a container known as a zone.

Every domain (e.g., perezbox.com) has a set of records. These records tell the web where to find information for a domain.

For example, I leverage tony@perezbox.com as my email. I use what is known as an MX record in my domain’s zone file to tell the web how to route email to my inbox. Additionally, I have a website that leverages an A record, which tells the internet where to find the content of my site. That’s about as deep as I’ll go into zones here, but understand that every domain has one, and the piece of the DNS ecosystem that controls these zones is known as the Authoritative DNS.
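As a quick illustration, here is a minimal sketch (using the dnspython library) of what those records look like when the zone is queried for perezbox.com; the answers come back from whatever Authoritative DNS is serving the zone.

import dns.resolver

# Where should mail for the domain go? (MX records)
for rdata in dns.resolver.resolve("perezbox.com", "MX"):
    print("MX:", rdata.preference, rdata.exchange)

# Where does the website live? (A record)
for rdata in dns.resolver.resolve("perezbox.com", "A"):
    print("A: ", rdata.address)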

These zones are typically a feature embedded within a platform like a Registrar or a CDN provider.

Registrars are those that sell you the domain – think of a NameCheap. A Content Distribution Network (CDN), meanwhile, helps ensure performance and availability – something like our alma mater, Sucuri. Both have their own reasons for wanting to retain a domain’s zone information, and in doing so they treat it as an embedded feature.

Note: Some CDNs don’t allow you to use other Authoritative DNS providers. While an antiquated approach, this would make it impossible to use them with NOC.org.

As the domain owner, you have the ability to choose who manages your zone. You can move your authoritative DNS to another provider. Doing so will often help provide failover and redundancy, especially when you have all your eggs in one basket – Registrar, DNS, CDN, WAF, etc.

It all works great, until it doesn’t.

Ensuring Business Continuity

Things go down; that is a hard lesson we learned running our own CDN / WAF for years. You can do everything in your power to ensure the service is never disrupted, but Murphy often has other plans – whether it’s a partner disruption or something as innocuous as an oversight during a PR.

Leveraging an independent Authoritative DNS can add exponential peace of mind to an organization that depends heavily on their online presence.

NOC.org is here to help provide that. Think of us as a complementary solution, not a replacement.


Content Filtering with CleanBrowsing

Published in Security on July 31, 2020

Content filtering is one of the most under utilized tools in creating safe browsing experiences.

A few years back, while on one of our many walks around the office, Daniel and I found ourselves in a rabbit hole discussing our home networks. Our oldest kids were barely 9 and 10, and our youngest were somewhere in the 4 / 5 age range.

Like many parents, we were slowly succumbing to a world where we had to struggle with the debate of online access for our kids. On one hand, it was so peaceful when they were on their machines; on the other, we struggled with the idea of continuous connection at that age.

This was further compounded by our understanding of the threats online, not just malicious ones. We started to explore threats like content we wouldn’t find appropriate for our own children (e.g., pornography, obscene content).

As we realized we were caving to the idea that our kids would inevitably be connected to their devices, we set out to find ways to help ensure they were having safe browsing experiences. So we built a content filtering platform called CleanBrowsing.

What is Content Filtering?

The premise of content filtering is that you choose what is, and is not, accessible on your internet. We place a lot of emphasis on adult content like pornography, but it can also be used to help combat online addictions to gambling, gaming, online shopping, and a number of other challenges in the new digital age.

Daniel and I are very different in our ideologies and philosophies, so flexibility was a must, which is why being able to create our own content filtering at home was so important. Content filtering allows us to choose what we allow to be seen on our home networks. It applies to all the devices connected to your home router, but it can also be configured individually on each device.

With content filtering, a parent can choose what they do, and do not, want to allow on their home Wi-Fi. This extends not just to your family, but to anyone else visiting your home.

If you’re curious how this works, we offer a free service that any parent can use. We don’t track who is using it, and we don’t know what it is being used for.

Filter – DNS IPs – Description
Security – 185.228.168.9, 185.228.169.9 – Malicious domains blocked (phishing, malware)
Adult – 185.228.168.10, 185.228.169.11 – Adult domains blocked; Search Engines set to safe mode; +Security Filter
Family – 185.228.168.168, 185.228.169.168 – Proxies, VPNs & Mixed Adult Content blocked; YouTube set to safe mode; +Adult Filter
CleanBrowsing Free Content Filtering Options

The following table provides you an example of how content filtering can be used to filter content based on specific categories:

Filter – Description
Adult & Pornography – This filter blocks access to adult and pornographic content. It includes escort sites, Pornhub, and similar domains. It also enforces Safe Browsing on Google and Bing.
Adult Mixed Content – This filter blocks access to sites that allow pornographic content, even though they may also be used for non-adult activities. It includes domains like Reddit and some image sharing domains.
Ads & Tracking – This filter blocks access to ads and tracking products. It includes Google Ads, Mixpanel, and other ad-based products.
Torrents – This filter blocks access to torrent sites. It includes The Pirate Bay.
Proxy & VPNs – This filter blocks access to proxies and VPN products, which are often used to bypass filters.
Gambling – This filter blocks access to online gambling sites.
Social Network – This filter blocks access to social networks. It includes Facebook, Twitter, and Google Plus.
Small Snippet of the Categories Available for Content Filtering with CleanBrowsing

How Content Filtering Works with CleanBrowsing

Almost every device you interact with – from your refrigerator to your laptop – makes use of something known as the Domain Name System (DNS). Think of DNS as the central nervous system of the web.

With DNS, your browser (e.g., Chrome, Firefox, Edge) knows where perezbox.com is on the internet. We built a layer of the DNS construct known as a resolver and introduced a filtering layer on top of it. This layer allows us to filter content based on your desired preferences.

If you want more control – the ability to tune the filtering, make use of an additional 16 filters, or use a number of cool features like custom blocks and custom allow and block lists – then learn more about the differences between the paid and free service.

Tech Note: Playing with networks can take a bit more time, and can be a bit frustrating, but the end results can be extremely satisfying. Patience is key. If you have questions, just send me a note, I’ll be happy to give you a hand.

To help in the process, we created a free community portal that works to answer as many questions as possible. Let us know what we’re missing.

CleanBrowsing Free Community Forum


You Don’t Need a VPN

Published in Security on July 13, 2020

A Virtual Private Network (VPN) allows a component in a trusted zone to be accessed from an untrusted zone. This technology enables a user to access company data from Starbucks Wi-Fi. It was a clever way to ensure that individuals who needed access had it when they needed it, from wherever they were, in a secure manner.

This article explains why VPNs have a purpose, but why the layperson does not need a VPN.



3 Tips to Secure Your Home Network

Published in Security on March 24, 2020

Whether we like it or not, we have all become the network administrators of our own home networks. As such, our responsibilities extend beyond protecting our families to being good stewards of the networks we’re connecting to (e.g., work).

To help, here are a few tips that should help you create a safe environment for you, your loved ones, and your company.

1 – Establish a Separate Network

Over the past 10 years I have spent a great deal of time working with consumers and businesses around the world, helping them prevent and remediate hacks. More often than not, they missed one very simple principle in security – functional isolation.

This simply speaks to the idea of isolating environments to a specified function. In the world I come from, that meant ensuring you didn’t have a single server serving multiple functions (e.g., web server, DB server, file server, key server, etc.).

The same rule applies to your home network. Without getting into the weeds, a very simple trick is to create a dedicated subnet for your own use. An easy way to do this is to purchase a second router, connect it to the one your Internet Service Provider (ISP) provided you, and restrict access to that network.

Example of a Home Network with two subnets – one for work and one for the family

In addition to isolating traffic, this has the added benefit of addressing some of the network saturation you experience if you have kids who love playing video games.

2. Configure the Router

These days I spend a lot of my time helping parents and organizations alike configure their networks through CleanBrowsing. In that process, one thing has become apparent: routers are rarely configured correctly.

A couple of things to consider when configuring your router:

  1. Check if the router supports automatic updates. Let’s face it, you’re likely not going to keep up with them. At a minimum, subscribe to get notifications of updates.
  2. Disable Wi-Fi Protected Setup (WPS) and the Universal Plug and Play (UPnP) protocol. This is especially good if you live in close quarters with someone else (e.g., townhouse, apartment, etc.) or if you have kids. :)
  3. Enable the router firewall if it’s available, and at a minimum leave the default settings. Only mess with the defaults if you know what you’re doing.
  4. For all that is holy, please update the default login credentials and save them in the password vault you’re using.
  5. DNS is a critical piece of the puzzle, learn how I use CleanBrowsing to provide a safe browsing experience at home and how DNS can be used to mitigate security threats.

3. Force Good Online Behavior

One of the biggest contributing factors to small businesses getting hacked is poor online behavior. I encourage you to pay special attention to how you interact online. Here are a few tips to help:

  1. Try to separate activities by browsers if possible. For instance, dedicate one browser to your social sites (e.g., Twitter, Instagram, Facebook, etc..), another for your work related sites, and try to restrict when you access financial institutions.
  2. It’s never a bad time to do an audit of your passwords. Are they the same across all your sites (e.g., financial, social, company, etc.)? If so, it might be good to invest in randomly generated passwords and a password vault (e.g., LastPass, Dashlane, 1Password) to help you remember them.
  3. Are you leveraging the Multi-Factor Authentication (MFA) features provided by the various platforms you interact with? If not, it would be a good time to lean into that. My buddy Jesper has been writing an exceptional series on MFA that I encourage you to read if you have time.

Be a Responsible Network Steward

We’re in uncharted waters these days, and each of us has a responsibility to help keep our networks safe from bad actors. This dramatic shift to Work From Home (WFH) has shattered the last form of network perimeter most corporations have been holding onto, and we all need to do our part to help protect them.

In the process you might find yourself intrigued by what networks have to offer. :)


Mitigating Web Threats with DNS Security | CleanBrowsing

Published in Security on December 24, 2019

On December 18th, DeepInstinct put out a great article outlining the latest Legion Loader campaign. Whether you’re a parent or an organization, this serves as a great example of the effectiveness of DNS security in mitigating this type of attack.

Legion Loader Campaign

This campaign is suspected of being a dropper-for-hire campaign because of the number of different malware payloads it’s distributing (e.g., info-stealers, backdoors, crypto-miners, etc…).



DNS Firewall to Enhance Your Networks Security | CleanBrowsing

Published in Security on October 7, 2019

DNS is the internet’s lookup table; it builds a bridge between a domain name (e.g., perezbox.com) and an IP address (e.g., 184.24.56.17), the IP address being where you can find the server that hosts the domain. In addition to its job as a lookup table, it can also serve as an effective security control.

DNS is lightweight, doesn’t require an installation, is highly effective, maps to the TTPs employed by attackers, and, more importantly, is affordable.

This article will introduce the concept of a DNS Firewall (Protective DNS) and encourage you to think of it as an additional layer in your security governance program.

Mitigating Attack Tactics

Understanding how attackers leverage domains in their attacks allows us to appreciate how effective DNS can be. Here are a few tactics, techniques, and procedures (TTPs) leveraged by attackers that help illustrate the point:

Benign Website – An attacker compromises a benign site (domain); it’s then used to distribute malware or perform other nefarious activity (e.g., phishing, SEO spam, etc.).
Malicious Site – An attacker creates a malicious site (domain); its sole purpose is to distribute malware or perform other nefarious activity (e.g., phishing, SEO spam, droppers, etc.).
Command & Control (C&C) – A Command and Control (C&C) server is what an attacker uses to facilitate their orchestration. Payloads will phone home to C&Cs for instructions on what to do next.

The scenarios above all leverage Fully Qualified Domain Names (FQDNs) – whether for a site to render or for a payload to phone home.

Example 1: The 2019 Mailgun Hack

In 2019 there were a number of WordPress hacks that exploited a vulnerability in a well-known plugin. This exploit affected thousands of sites, including the popular Mailgun service.

Attackers used their access to embed JS code on the sites that would initiate calls to a number of different domains: hellofromhony[.]org, jqueryextd[.]at, adwordstraffic[.]link. These domains would then initiate different actions (including stealing credit card information) depending on the request.

The embedded JS payload initiates a DNS request.

Example 2: Managing Multiple Servers

Assume you are an organization responsible for hundreds, if not thousands, of servers. An attacker bypasses your defenses and moves laterally through the network. In the process, the attacker sprinkles droppers across the network designed to phone home to their C&C.

The phone home initiates a DNS request.

Example 3 – Mitigating User Behavior (Phishing)

If there is one thing we can always count on, it’s that curiosity kills the cat. Users always click.

Clicking the link initiates a DNS request.

The Effectiveness of DNS

The examples above are only a few that quickly illustrate how DNS can be leveraged to mitigate attacks. To help support the case, we can look at the Verizon Data Breach Investigations Report (DBIR).

Analyzing a five-year period, 2012 through 2017, you find that close to a third of the 11,079 confirmed data breaches involved threat actions that DNS could have mitigated (source: Global Cyber Alliance, 2018). Having a security control relevant to roughly a third of breaches is pretty impactful for any organization.

With DNS as the backbone of how the internet works, any time a domain is queried DNS is, by design, triggered. Consider the different scenarios above, and you quickly realize that DNS is the gateway that all requests have to pass through.

DNS Firewall (Protective DNS) as a Security Control

The illustration above shows how and where a DNS Firewall might fit into your network’s architecture.

The DNS firewall will inspect the initial query, verifying that it’s safe, before allowing it to proceed with the rest of the DNS communication chain. There are a number of great DNS Firewall services; I personally leverage the CleanBrowsing Security Filter (it’s Free and highly effective).

IPv4 address: 185.228.168.9 and 185.228.169.9
IPv6 address: 2a0d:2a00:1::2 and 2a0d:2a00:2::2
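If you want to see the filter in action, you can point a query directly at those resolvers. Below is a minimal sketch using the dnspython library; the exact behavior for a blocked domain (refused, NXDOMAIN, or a block-page IP) depends on the service, so treat this as an illustration rather than a definitive test.

import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["185.228.168.9", "185.228.169.9"]  # CleanBrowsing Security Filter

try:
    for rdata in resolver.resolve("perezbox.com", "A"):
        print("resolved:", rdata.address)
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
    print("the filter declined to resolve this domain")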

If you run your own internal DNS, you’ll want to look into leveraging Response Policy Zones (RPZ). RPZ is a security specification and protocol that enhances DNS resolvers with security intelligence about the domains they handle. It allows a local DNS resolver to restrict access to content that is malicious or unwanted – in effect, it lets you build your own DNS Firewall.

This deployment is applicable to large organizations and homes alike. :)


Mozilla Introduces Mechanism to Hijack all DNS Traffic in the Name of Privacy

Published in Security on September 16, 2019

In September of 2019 Mozilla will begin releasing DNS over HTTPS (DoH) in Firefox via their Trusted Recursive Resolver (TRR) program. (For background, see this primer on DNS security.)

The change is based on a theme we’ve heard before: a) the old protocols don’t take security and privacy into consideration, and b) there is the threat that people can see what you are searching.

This should sound familiar, we saw a similar campaign driven by Google with their #httpseverywhere campaign in 2014 – 2018.

In both instances, these organizations are trying to tackle fundamental flaws in the technology fabric we all depend on. The difference being in how the problem is being approached.


Technically speaking, I don’t have an issue with the idea of making DoH available, though I do question whether a system-level control should shift to the web layer. What gives me heartburn is the implementation – they are enabling it ON by default, without asking the consumer. They are also partnering with Cloudflare as their default DoH service provider; this means every request you make in Firefox will go to a private organization that the consumer has not chosen. For me, this is a serious breach of trust by the organization that is waving the trust banner.

In contrast, Google’s implementation will be set OFF by default. It will also allow the user to choose the DoH provider of their choice.
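For context on what a DoH lookup actually is, here is a minimal sketch that resolves a name over HTTPS using Cloudflare’s JSON endpoint (the default TRR partner mentioned above); the endpoint and response format are as publicly documented at the time of writing.

import json
import urllib.request

url = "https://cloudflare-dns.com/dns-query?name=perezbox.com&type=A"
request = urllib.request.Request(url, headers={"accept": "application/dns-json"})

# The DNS question and answer travel inside an ordinary HTTPS request
with urllib.request.urlopen(request, timeout=5) as response:
    payload = json.loads(response.read().decode())

for record in payload.get("Answer", []):
    print(record["name"], record["data"])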

Why Should You Care about Mozilla’s DOH Implementation

If you are someone that is responsible for controlling what happens on your network, you should care a lot. The default implementation by Mozilla is, for lack of a better word, a Virtual Private Network (VPN) that allows anyone using Firefox to bypass whatever controls exist on a network.

A few examples of what this means:

  • Let’s assume you are a school. You have hundreds of kids on your school Wi-Fi. You have implemented your own DNS resolver to protect kids from malicious sites or to stop them from accessing pornographic or obscene content. This new implementation will make it so that your kids can now bypass your web controls.
  • Let’s assume you are a parent. You worry about what your kids have access to when they are surfing the web. You deploy a network tool to help you control what they can and can’t access while inhibiting the way they interact on the web. This new implementation will make it so that your kids can now bypass your web controls.
  • Let’s assume you are addicted to porn – a very real problem. You deploy controls on your network to prevent yourself from accessing obscene content (something that is sometimes uncontrollable for the afflicted). This new implementation will make it so that you can now bypass your own web controls.
  • Let’s assume you are a security engineer inside an enterprise NOC. You are chartered with analyzing traffic to ensure malicious traffic is not coming in, or going out. This new implementation will allow anyone on your network to bypass whatever controls you might have in place.
  • Let’s assume you are a government trying to implement new regulations to hold ISPs responsible for child pornography and other nefarious acts online. This new implementation would undermine that effort.

These are only a few, crude, examples meant to highlight the seriousness of the chosen deployment by Firefox.

What Can You Do About Mozilla’s Implementation

If you prefer to retain control of your network and not allow Mozilla to make default choices for you, you have a few options:

Option 1: Leverage a Network Content Filtering Service That Disables DoH By Default

If you are a parent, school, or large organization you can use a cloud-based DNS content filtering service like CleanBrowsing to help mitigate this change.

If you are a large enterprise, you can signal to Firefox that you have specific controls in place and that the DoH deployment should be disabled.

Network administrators may configure their networks as follows to signal that their local DNS resolver implemented special features that make the network unsuitable for DoH.

DNS queries for the A and AAAA records of the domain “use-application-dns.net” must respond with either a response code other than NOERROR (such as NXDOMAIN or SERVFAIL), or with NOERROR but no A or AAAA records in the answer.
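If you want to verify what your own resolver is signalling, here is a minimal sketch (using the dnspython library) that checks the canary domain against the conditions above; it is a quick diagnostic, not an official Mozilla tool.

import dns.resolver

CANARY = "use-application-dns.net"

def network_signals_doh_off():
    """True if the resolver answers the canary the way Mozilla treats as 'disable DoH'."""
    try:
        dns.resolver.resolve(CANARY, "A")
        return False  # NOERROR with A records: no signal present
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return True   # NXDOMAIN, or NOERROR with no A records
    except dns.resolver.NoNameservers:
        return True   # SERVFAIL-style failures surface here

print("Network signals DoH off:", network_signals_doh_off())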

Make note of this very important caveat in their release notes: if a user has chosen to manually enable DoH, the signal from the network will be ignored and the user’s preference will be honored. Depending on your organization’s position on this, you might want to consider Option 3.

Option 2: Disabling DNS-over-HTTPS in Firefox

You can disable DoH in your Firefox connection settings:

  1. Click the menu button and choose Preferences.
  2. In the General panel, scroll down to Network Settings and click the Settings… button.
  3. In the dialog box that opens, scroll down to Enable DNS over HTTPS.
    • On: Select the Enable DNS over HTTPS checkbox. Select a provider or set up a custom provider.
    • Off: Deselect the Enable DNS over HTTPS checkbox.
  4. Click OK to save your changes and close the window.

Option 3: Remove Firefox

As extreme an option as this might sound, I have spoken with a few enterprise CISOs who are considering removing Firefox from their networks. Their reasoning revolves around two distinct positions: browsers are assuming too much control, and Firefox is effectively behaving like a VPN – which seems to be the direction it is intentionally heading.

The Evolution of a Critical Piece of the Web – DNS

A critical piece of the web is evolving, and most consumers have no understanding of, or appreciation for, what that means – but the implications can be dramatic.

Regardless of which side of the fence you’re on, there is a mutual desire amongst technologists to ensure a more secure, private web; the question, however, is how you implement it.

I’ll dive deeper into the specifics of the community politics, and the technical differences between the options, in future articles. If you absolutely can’t wait, I encourage you to read this great article by one of my colleagues at GoDaddy, Brian Dickson of our DNS team – DNS-over-HTTPS: Privacy and Security Concerns.


Rethinking the Value of Premium SSL Certificates

Published in Security on August 12, 2019

There is an active campaign to reshape how online consumers see SSL certificates, with browsers and security practitioners taking a special interest in shutting down premium certificates. This article will shed some light on what is going on and provide some context as to why it’s happening; it will also offer my own personal opinions and recommendations for the future.

In summary, premium certificates – specifically EVs – offer more value than we’re letting on, because we’re allowing the wrong things to cloud the conversation.

Making Sense of the SSL Ecosystem

I recommend reading my primer on SSL, specifically how HTTPS works.

An SSL Certificate is a digital file that binds an identity with a public key and a cryptographic private key. This file is used to verify and authenticate the owner (an identity) of the certificate during a communication exchange between two systems. This SSL Certificate is also what allows you to make use of the HTTPS / TLS protocols on your website.

A site that is leveraging HTTPS/TLS makes use of an SSL certificate to accomplish two goals:

  • Authenticates the identity of the website to the site visitor;
  • Protects, via Encryption, information as it’s transmitted from the web browser to the web server. It ensures that data in transit cannot be intercepted (e.g., MiTM attack) by a bad actor;

Here is a great example:

What you see in this example is that this certificate was issued to the godaddy.com domain by the GoDaddy Certificate Authority (CA). These CA’s are responsible for the creation, issuance, revocation, and management of SSL certificates.
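You can inspect that same binding yourself. Here is a minimal sketch using Python’s standard ssl module to pull the certificate a site presents and print the subject, the issuing CA, and the negotiated TLS version; godaddy.com is used only because it is the example above.

import socket
import ssl

HOST = "godaddy.com"

context = ssl.create_default_context()  # validates the chain against the local root store
with socket.create_connection((HOST, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Subject    :", dict(item[0] for item in cert["subject"]))
        print("Issuer     :", dict(item[0] for item in cert["issuer"]))
        print("Not after  :", cert["notAfter"])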

How SSL Certificates Are Created

How they go about performing these duties is defined by a voluntary organization known as the Certificate Authority / Browser (CA/B) Forum. The output of this forum is something known as the Baseline Requirements (BRs), and these BRs are the rules CAs must abide by if they want their certificates to be recognized by something known as the browser’s root store.

Being in the browsers’ root stores is critical for a CA. To appreciate their importance, simply look back to September 2017, when Chrome distrusted Symantec’s root certificate. Being distrusted results in every certificate issued by the CA rendering a page like this:

So yes, having a publicly trusted root is the lifeblood of every CA. These root certificates are used in the issuance of certificates, and as long as the CA follows the rules defined by the BRs, the browsers will continue to trust the CA’s root certificate in their root stores.

Types of SSL Certificates

Under the rules set forth by the BRs, CAs have the ability to issue a number of different certificate types.

For the purposes of this article I’ll focus only on three:

Domain Validation (DV) – Validates the applicant’s ownership or control of the domain.
Organization Validation (OV) – Validates the applicant’s identity as a company or individual, as well as the domain.
Extended Validation (EV) – Validates the legal entity that controls the website. This is the most stringent validation process available.

A couple things to clarify:

  • All certificates function the same in protecting information in transit; you’re not getting a higher or lower degree of encryption with any of them. The encryption ciphers are set by the web servers, and the minimum values are defined by the BRs;
  • The thing that has always differentiated these certificates to the public has been their treatment on browsers;
  • The treatment of DV / OV certificates is the same in browsers, and EVs have always been that special option;

Treatment of SSL Certificate Types

The thing that has always set the certificate types apart has been their treatment on the browser User Interface (UI). The original premise of the treatment was to enable the web users, like you, to quickly delineate those sites that had gone through additional scrutiny in their validation process.

For these examples I’m going to focus on Chrome because it’s the most widely adopted browser in the market (55% market share as of July 2019). They are also the ones leading the fight against premium certificates and the changes I’ll highlight below.

Here is an example of what a DV / OV certificate might look like in the URL bar of the Chrome browser today (in 2019):

Here is an example of what an EV certificate might look like in the URL inside the Chrome browser today (in 2019):

Here is an example of what the certificates used to look like:

As you look through the examples above you can quickly see what is happening. The treatment of EV certificates is changing dramatically. In earlier versions it was easy to point out those sites that had gone through higher scrutiny in their validation process, and in theory it should have given web users a higher degree of confidence in the legitimacy of the site.

Here is an example of what you can expect in future releases of the Chrome browser:

What you see above is work being done by Google to remove the indicator altogether. You can expect the final iteration to potentially look very different from the proposal above.

The genesis of why can be found in Google’s release of a research paper titled The Web’s Identity Crisis: Understanding the Effectiveness of Website Identity Indicators.

The entire paper boils down to this:

In 14 iterations on browsers’ EV and URL formats, no intervention significantly impacted users’ understanding of the security or identity of login pages.

Authors: Christopher Thompson, Martin Shelton, Emily Stark, Maximilian Walker, Emily Schechter, Adrienne Porter Felt – Google

In other words, there was no perceived value of the UI indicators. Because there is no value, Google will proceed with removing them (in the form of burying them deep into secondary panels). You can expect that the next analysis will show that users do not click on the secondary panels, as such their value is further diminished.

Discourse Makes a Solution Difficult

Here are some of my personal observations, points of contention, and positions from both sides of the aisle as to why premium certificates are seen as ineffective:

  • Even amongst security professionals, few truly understand the difference between certificate types;
  • We never really brought about good awareness to what these indicators were meant to signify;
  • The CA/B Forum is comprised of a lot of attorneys, which creates a very CYA-like approach to the development of the BRs – in other words, we avoid anything that might imply liability. This framing makes progress difficult; we shy away from things like “assurance” and “trust,” and it creates an environment of extreme interpretations;
  • Massive commercial entities were built around these SSL certificates, such that any perspective from a CA is immediately dismissed because it’s believed to be partial and beholden purely to commercial interests;
  • There are real challenges like collisions in the systems, where two entities could exist with the same name, established under different jurisdictions. Which technically, isn’t really a problem if it’s a legitimate entity;
  • We inaccurately try to place the value of premium certificates on things like security (e.g., “premium certificates curtail phishing”). This narrative derails and distracts the conversation;
  • Perception of issues exist with the fact that you can have a validated entity that is not the same as the domain (e.g., domains owned by franchises). Which technically isn’t a problem if we refine the meaning of the value of the premium certificate and the assurances it provides;
  • As a community there is an “us” vs “them” mentality, where the browsers are good and the CA’s are bad. This has led to a contentious, toxic, relationship between both parties, which does little for the web;
  • We lean on security whenever there is no valid answer, never differentiating between practical and theoretical security;
  • We claim to be considerate of the greater web, but share very little empathy for the challenges we’re introducing to the consumers of the web (micro-businesses, large organizations, and passive consumers alike);
  • The advent of social platforms has given a platform to pundits all around the world, experts and influencers alike, that amplify and convolute the conversation in the interest of goodness, fairness and security while simultaneously adding emotion and unreasonable candor making it impossible to collaborate for a better outcome – then again, this affects almost every industry these days;
  • The validation process requires humans, humans are fallible, and it precludes us from automating and making it available to the masses in a scalable manner;
  • Traditionally, CA’s have been perceived to be stuck in their ways, my own organization included, incapable of keeping up with the evolution of the web – we are probably our own worst enemy;

The Unrecognized Value

Studies have been conducted on both sides of the aisle. On the browsers’ side, a study by Google (The Web’s Identity Crisis: Understanding the Effectiveness of Website Identity Indicators) showed that web users don’t recognize value in UI indicators. On the CA side, you have a study by Georgia Tech (funded by Sectigo) (Understanding the Role of Extended Validation Certificates in Internet Abuse) which tries to show a low propensity for validated domains to be used for malicious purposes. Whether you agree with the methodologies leveraged or the outcomes they offer, I believe the unrecognized value is somewhere in between.

I believe that Google is right, in today’s incarnation of the UI indicators it is absolutely realistic to believe that web users have no understanding of what they mean. I also believe, to an extent, that Georgia Tech’s study (while a bit limiting) speaks to a truth in the low propensity of a validated organization to be used for malicious purposes.

I believe we are missing some really interesting opportunities to help bridge the trust gap online through a structure that is already in place:

  • The validations being done for certificates like EV’s, whether we like it or not, and regardless of what the BR’s state, should facilitate a level of assurance of legitimacy to web users.
  • While not perfect, the public Web Trust ecosystem built between browsers and CA’s can be the building blocks for something that has a dramatic impact on the great problem of identity assurance and trust on the web.
  • There is some validity to the idea that a site that has a premium certificate, specifically EV, has a lower propensity to be used for malicious purposes. It’s not so much the cost, but more the level of effort required to forge all the required documents and forms of proof (which sometimes requires updating gov’t systems).
  • Validating an entity is valuable, whether they are doing something malicious or not. The process of validating helps collect real information that can be used later if required.
  • Another anecdotal insight comes in what the idea of “validating” actually means to a domain holder. It’s arguable that an organization that is going through the process of validating their domain cares enough about their identity, their security, to have more controls than the average Joe to ensure the integrity of their site. This is especially important when you think how most Phishing attacks happen today (i.e., benign sites being hacked and being used maliciously).

Where I disagree is in the statements that removing the UI indicator is the solution or that EV’s deter phishing attacks.

A failure to understand the indicator doesn’t mean the indicator isn’t valuable, but rather that we should work harder to pull the value forward.

Ironically, there is probably no greater example of the power of awareness and education than Google’s very own #httpseverywhere campaign – a campaign in which Google drove home the importance of an HTTPS/SSL indicator by leveraging their greatest asset: SERP rankings. This initiative worked to educate consumers to look for the “lock” and “secure” indicators, which makes me believe we can educate web consumers.

We live in a world where trust online is growing in importance. As such, we should be leaning into solutions that help pull forward that value. There are over a billion websites live on the web, and growing. Web consumers struggle every day with understanding what websites they can / should interact with.

As a community we should revisit the value and purpose of the premium certificates, specifically EV’s, and place emphasis around things like “trust” and “assurance.” We should work to pull that value forward in a way that we can help consumers differentiate and recognize easily.

Disclaimer

In full disclosure, I’m GoDaddy’s General Manager (GM) for the Security product group. This business line includes GoDaddy’s Certificate Authority (CA), which means we sell SSL certificates. The portfolio has considerable depth in the web presence domain, with features like a Web Application Firewall (WAF), Content Delivery Network (CDN), security scanning, brand monitoring, incident response, Premium DNS, website backups, and the Sucuri brand.


ANALYZING SUCURI’S 2018 HACKED WEBSITE TREND REPORT

Published in Security on April 15, 2019

The Sucuri team recently released their second annual security report – the Hacked Website Report 2018. It looks at a representative sample of infected websites from the Sucuri customer base ONLY. This report helps us understand the actions taken by bad actors once they penetrate a website.

This report analyzed 25,466 infected websites and 4,426,795 cleaned files, aggregating data from the Threat Research Group. This is the team that works side-by-side with the owners of infected websites on a daily basis, and its members also generate much of the research shared on the Sucuri Blog and Sucuri Labs.

This report is divided into the following sections:

  • Top affected open-source CMS applications
  • Outdated CMS risk assessment
  • Blacklist analysis and impact on webmasters
  • Malware family distribution and effects

This post will build on the analysis found in the report and share additional insights from the report’s webinar.


CMS ANALYSIS

The analysis shows that in 56% of the hacked sites the core platform was patched with the latest version. The real insights, however, come into focus as we dive into the specific CMS’ distribution in the sample base.

2018 – Sucuri CMS Distribution of Out-of-Date Core at Point of Infection

Although WordPress is the platform most likely to be up to date at the point of infection, it continues to be the #1 platform we see in our environment.

2018 – Platform Distribution in Sucuri Sample Base

This is undoubtedly related to Sucuri’s popularity in the platform’s ecosystem, but with a 60% market share of CMS applications and 34% of all websites on the web, its representation is also understandable. What this also highlights is that something else is contributing to these hacked sites.

2018 – WordPress Out-of-Date State at point of infection

WordPress Version – In the 2016 report, 61% of the WordPress sites had been out of date at the point of infection. In 2018, this number dropped to 36.7% (2017: 39.3%). Overall I’d say that’s pretty amazing, and a direct reflection of the hard work by the WordPress security team to introduce and deploy auto-updates.

E-commerce Platforms – The platforms that concern me the most are the ones used for online commerce. They represent a big percentage of the platforms that are out of date at the point of infection – Magento (83.1%), OpenCart (91.3%), and PrestaShop (97.2%). These are the core applications users are leveraging to perform online commerce transactions. This is especially concerning because, unlike WordPress, these platforms still experience critical vulnerabilities in their core. Coincidentally, these are also the platforms that have security obligations set forth by the Payment Card Industry (PCI) Data Security Standards (DSS), one of which is keeping software up to date (requirement 6).

PCI Requirement 6.2 Ensure that all system components and software 
are protected from known vulnerabilities by installing applicable
vendor supplied security patches. Install critical security patches
within one month of release. - 2018 Payment Card Industry (PCI) Data
Security Standard, v3.2.1

Another theme you’ll find with this cohort is that they are also the platforms that struggle with backwards compatibility. This speaks directly to the complexity of upgrading these platforms, which, when coupled with human behavior, is a recipe for disaster.

Common Issues & Threats

While the report does show an increase in WordPress sites year over year, it’s not indicative of the platform being more or less secure. The leading contributors to website hacks, holistically speaking, can be boiled down to two key categories:

  • Credential Stuffing (Brute Force Attacks)
  • Exploitation of Vulnerabilities in Third Party Ecosystems

I won’t spend much time talking about credential stuffing – the act of stuffing access control points with different username / password combinations; instead I want to focus our discussion on the third-party ecosystems.

The accompanying webinar did peel back the layers on the threats posed by the third-party ecosystem (e.g., plugins, modules).

2018 – Identified and Analyzed Vulnerabilities in CMS Third-Party Ecosystems

Of the 116 WordPress vulnerabilities Sucuri identified, 20 were categorized as severe (17%), as were another 28 in Joomla! (50%). Of the 196 total vulnerabilities, 35 had an installation base of over 1 million users. 2019 has seen a spike in the number of vulnerabilities hitting the market; to date, severe WordPress vulnerabilities already stand at 50% of the total identified in 2018.

The one platform you don’t see in this analysis is Magento. For that, I would leverage insights from Willem’s Lab. His insights on the platform and its ecosystem are spot on; unlike WordPress, Magento has predominantly been plagued with issues from core vulnerabilities (e.g., ShopLift, circa 2015), but the end of 2018 and beginning of 2019 saw a shift in which the platform’s third-party ecosystem is becoming the attack vector of choice.

Note: If you’re a Magento operator, I encourage you to leverage the new central repository of insecure modules released by a group of Magento professionals. A similar repository exists for WordPress.

BLACKLIST ANALYSIS

The report highlights the distribution of blacklisted and non-blacklisted sites at the point of infection. This illustrates a) the different indicators of compromise and b) the effectiveness and reach of blacklist authorities.

2018 – % of Hacked Websites Blacklisted at Point of Cleanup

This year we saw a 6% drop (17% -> 11%) in the blacklist state of the sites we worked on. It’s difficult to say exactly why, but it’s likely related to how these blacklists operate. It does highlight the need to have a comprehensive monitoring solution as part of your security controls; depending solely on authorities like Google, Norton, and McAfee is not enough.

This becomes even more evident when you look at the detection effectiveness across the different authorities.

This year we saw Google drop from 12.9% to 10.4%, and we saw Yandex join the top 4 (previously it was not material enough to rank). We also saw McAfee drop about 4%, while Norton continues to lead the detection rate at 46.1%.

Not all blacklist authorities are the same.

Google is the most prominent because it’s the one most browsers leverage, most notably Chrome. The Sucuri team put together a great guide to understanding the different Google warnings. When Google detects an issue it presents the user with a red splash page – stopping a visitor dead in their tracks.

2018 – Example Google Blacklist Block

Other entities, however, are effective for a different reason; for instance, when Norton and McAfee flag you, anyone using their desktop AV client will be prevented from visiting the site, or at least notified of an issue. These entities also share their APIs with a number of different services and products; a great example is Facebook’s use of McAfee to parse malicious domains.

2018 – Example AV Blacklist Block

Being blacklisted by one doesn’t necessarily mean the others will follow, and being removed from one doesn’t mean the others will respect the state change. This introduces a lot of stress and frustration for website owners. The best approach to managing this is to register with as many of them as possible so that you can maintain a direct relationship with each (a sketch of automating one of these checks follows the list):

  • McAfee Site Advisor: http://trustedsource.org/
  • Norton SafeWeb: https://safeweb.norton.com/tags/show?tag=WebMaster
  • Yandex Webmaster: https://webmaster.yandex.com/
  • Google Webmaster: https://www.google.com/webmasters/#?modal_active=none
  • Bing Webmaster: https://www.bing.com/toolbox/webmaster
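
If you want to automate this kind of monitoring, most of these authorities expose an API you can poll. As a minimal sketch, the Python snippet below queries Google’s Safe Browsing Lookup API (v4) for a single URL; the API key, the client ID, and the threat types queried are assumptions you would adjust for your own account.

"""Check a URL against Google Safe Browsing (Lookup API v4) -- minimal sketch."""
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # assumption: obtained from the Google Cloud console
ENDPOINT = f"https://safebrowsing.googleapis.com/v4/threatMatches:find?key={API_KEY}"

def check_url(url):
    """Return the list of threat matches for `url` (empty if not currently listed)."""
    payload = {
        "client": {"clientId": "site-monitor-example", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING", "UNWANTED_SOFTWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": url}],
        },
    }
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("matches", [])

if __name__ == "__main__":
    print(check_url("https://example.com/") or "Not currently flagged by Google.")

Running something like this on a schedule gives you an early warning that is independent of any single authority’s notification emails.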

MALWARE FAMILIES

This section shows you what attackers are doing once they have access to your environment. It helps shed light on “intent”.

2018 – Malware Family Distribution (Sucuri Labs)

It is very common for sites to have more than one payload, which is why the report counts sites in multiple malware families. Backdoors are a great example of the type of payload you can expect to find in almost any compromise.

Backdoors are payloads designed to give attackers continued access to an environment, bypassing existing access controls. They were found in 68% of the infected sites analyzed (a modest 2% drop from 2017). A backdoor is one of the first things an attacker will deploy to ensure that even if their actions are found, they retain access and can continue to use the site for nefarious actions. It is one of the leading causes of reinfection, and the most commonly missed payload.

2018 – SEO Spam Growth

Last year I called out the continued rise of SEO spam; this year was no different.

This is the result of a Search Engine Poisoning (SEP) attack, in which an attacker abuses the ranking of your site for something they are interested in. Years ago this was almost synonymous with the Pharma Hack, but these days you see attackers leveraging it in a number of other industries (e.g., fashion, loans, etc.). You can expect this in any industry where impression-based affiliate marketing is at play.

Example Site with SEO SPAM
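
Many SEP campaigns also cloak their spam, serving it only to search engine crawlers so the owner never sees it. A naive first-pass check, sketched below under that assumption, is to fetch the page twice – once with a normal browser user agent and once with a Googlebot user agent – and flag large differences; the user-agent strings and the 25% size threshold are arbitrary choices for illustration, not any vendor’s detection logic.

"""Rough check for cloaked SEO spam: compare browser vs. crawler views of a page."""
import urllib.request

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
CRAWLER_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def fetch(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def looks_cloaked(url, threshold=0.25):
    """Flag pages whose crawler-facing copy differs wildly in size from the browser copy."""
    browser_view = fetch(url, BROWSER_UA)
    crawler_view = fetch(url, CRAWLER_UA)
    larger = max(len(browser_view), len(crawler_view), 1)
    return abs(len(browser_view) - len(crawler_view)) / larger > threshold

if __name__ == "__main__":
    print(looks_cloaked("https://example.com/"))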

The team’s analysis highlighted an impressive increase (78%) in the number of files being cleaned in each case. This shows the pervasiveness you should expect after a compromise.

2018 – Sucuri Report – # of Files Affected Post-Compromise

It’s not enough to clean the files you can see; instead, perform a deep scan across files and databases to ensure everything has been removed.
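
As a rough illustration of what that deep scan involves, the sketch below walks a document root and flags PHP files containing a handful of markers commonly associated with obfuscated payloads. The pattern list is a tiny, assumed sample and the matches are leads for manual review, not verdicts.

"""Walk a document root and flag PHP files containing common malware markers."""
import re
from pathlib import Path

SUSPICIOUS = [
    re.compile(rb"eval\s*\(\s*base64_decode"),            # classic obfuscated payload loader
    re.compile(rb"gzinflate\s*\(\s*base64_decode"),       # compressed + encoded payloads
    re.compile(rb"assert\s*\(\s*\$_(GET|POST|REQUEST)"),  # request data executed as code
]

def scan(docroot):
    for path in Path(docroot).rglob("*.php"):
        data = path.read_bytes()
        for pattern in SUSPICIOUS:
            if pattern.search(data):
                print(f"[suspect] {path}: {pattern.pattern.decode()}")
                break

if __name__ == "__main__":
    scan("/var/www/html")  # adjust to your own document root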

Of the files affected, there were clear trends in the file types targeted. The top three modified files were index.php (34.5%), functions.php (13.5%), and wp-config.php (10.6%).

Every file saw an increase over 2017, and there was a change in the top three – .htaccess dropped out to make room for wp-config.php.

2018 – Top Three Files Modified Post-Compromise

The report outlines how each of these files is being leveraged, and by which malware families. The three files are not a surprise; they are popular because they load on every site request, belong to core files, and are often ignored by integrity monitoring systems.
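
That last point is exactly the gap a file integrity monitor closes. The sketch below is a stripped-down illustration of the idea – tools like OSSEC do this natively, and far better – recording a hash baseline for high-value files and reporting any change; the watched file list and baseline path are assumptions for the example.

"""Baseline-and-compare integrity check for high-value CMS files."""
import hashlib
import json
from pathlib import Path

# Assumed examples of files worth watching; adjust to your own theme and layout.
WATCHED = ["index.php", "wp-config.php", "wp-content/themes/mytheme/functions.php"]
BASELINE = Path("integrity-baseline.json")

def snapshot(docroot):
    root = Path(docroot)
    return {
        name: hashlib.sha256((root / name).read_bytes()).hexdigest()
        for name in WATCHED
        if (root / name).exists()
    }

def compare(docroot):
    current = snapshot(docroot)
    if not BASELINE.exists():
        BASELINE.write_text(json.dumps(current, indent=2))
        print("Baseline recorded.")
        return
    baseline = json.loads(BASELINE.read_text())
    for name, digest in current.items():
        if baseline.get(name) != digest:
            print(f"[changed] {name}")

if __name__ == "__main__":
    compare("/var/www/html")  # adjust to your own document root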

Fio Cavallari & Denis “Unmaskparasites” Sinegubko shared great details on what attackers are using these files for.

Index.php

  • Approximately 34.5% of sites had their index.php files modified after a compromise.
  • The index.php file is modified by attackers for a variety of reasons, including malware distribution, server scripts, phishing attacks, blackhat SEO, conditional redirects, and defacements.
  • 24% of index.php files were associated with PHP malware responsible for hiding a file inclusion.
    • This malware calls PHP functions like include and include_once, replacing the characters of the file path with the corresponding hexadecimal escapes and scrambled alphabetic characters (a simple detector for this pattern is sketched after this list).
  • 15.8% of index.php files were affected by malicious PHP scripts disguised using absolute paths and obfuscated characters, hidden within seemingly innocent files.
    • Instead of injecting the full malware code into a file, this method makes the malware more difficult to detect by using PHP includes and obfuscation.
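
For illustration, the sketch below flags include / include_once calls whose path has been spelled out in hex escapes, the pattern described above. The regular expression is an assumption about how this family commonly encodes paths, not an exhaustive signature.

"""Flag PHP include/include_once calls whose path is hidden behind hex escapes."""
import re
import sys
from pathlib import Path

# Matches e.g. include "\x2f\x76\x61\x72..." -- a path spelled out as \xNN escapes.
HEX_INCLUDE = re.compile(rb"include(_once)?\s*\(?\s*[\"'](\\x[0-9a-fA-F]{2}){4,}")

def scan(docroot):
    for php in Path(docroot).rglob("*.php"):
        if HEX_INCLUDE.search(php.read_bytes()):
            print(f"[hex-include] {php}")

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")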

Functions.php

  • 13.5% of compromised sites had modified functions.php files, which are often used by attackers to deploy SEO spam and other malicious payloads, including backdoors and injections.
  • Over 38% of functions.php files were associated with SEO spam injectors:
    • Malware that loads random content from a third-party URL and injects it into the affected site.
    • Able to update its configuration through a remote command.
    • Doesn’t explicitly act as a backdoor, but can use the same function to load any kind of code – including a backdoor.
    • Usually found in nulled or pirated themes and plugins.
  • 8.3% of functions.php files were impacted by generic malware.
  • 7.3% of files were associated with PHP.Anuna, which injects malicious code into PHP files.
  • Malicious payloads vary from spam injection, backdoors, and the creation of rogue admin users, to a variety of other objectionable activities.

WP-Config.php

  • wp-config.php was the third most commonly modified file (10.6%).
  • It contains sensitive information about the database, including its name, host, username, and password. It is also used to define advanced settings, security keys, and developer options.
  • 11.3% of wp-config.php files were associated with PHP malware responsible for hiding a file inclusion, also commonly seen in index.php.

CryptoMining and Ransomware

As we talk about what attackers are doing post-compromise, it’s worth spending a few minutes on Cryptomining and Ransomware.

In 2017, we saw the rise of ransomware across the entire InfoSec ecosystem. Its impact on websites, however, was marginal because of its ineffectiveness; mitigating a ransomware attack on a website is relatively straightforward – have a backup.

Cryptomining, however, is a bit of a different story.

Relationship Between Crypto Currency and CryptoMining Activity (2018 CheckPoint Report)

Cryptomining is the act of verifying and adding cryptocurrency transactions to the blockchain ledger. Under this model, the spoils belong to the group that processes the request the fastest. To achieve that you need processing power, and this is where websites and their associated hosts come into play.
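
To see why processing power is the whole game, here is a toy proof-of-work loop. It is purely illustrative – real cryptocurrencies (and the browser miners discussed below) use different, typically memory-hard algorithms – but the economics are identical: more hashes per second means a better shot at the reward.

"""Toy proof-of-work loop: grind nonces until the hash meets a difficulty target."""
import hashlib
import time

def mine(block_data, difficulty=5):
    """Find a nonce whose SHA-256 digest starts with `difficulty` zero hex digits."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

if __name__ == "__main__":
    start = time.time()
    nonce, digest = mine("example transactions")
    print(f"nonce={nonce} digest={digest} found in {time.time() - start:.1f}s")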

Although we saw a decrease in cryptomining activity in 2018, it’s an interesting payload to pay special attention to.

What you see in CheckPoint’s report is the correlation between the “value” of a coin and cryptomining activity. In other words, as the price of a cryptocurrency increased (think back to 1 BTC at roughly $19k), so did the cryptomining activity.

Analyzing this behavior (thanks, Thiago), you also find the following:

  • 67% of all cryptomining signatures were related to client-side infections with JavaScript-based miners like CoinHive.
    • These payloads abuse your visitors’ browsers and their local machines’ processing power (ever visit a site and have your browser die, or start chewing up local resources?). A simple check for these client-side miners is sketched after this list.
  • The remaining 33% of cryptominers were server-side and used PHP to mine digital currencies.
    • These payloads abuse the server housing your website, which can lead to your hosting provider shutting down your site, or to degraded performance.
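
As that quick client-side check, the sketch below fetches a page and looks for references to known in-browser mining scripts. The indicator list is a small, assumed sample (CoinHive itself shut down in early 2019) and will not catch obfuscated loaders.

"""Check a page for references to known in-browser mining scripts."""
import urllib.request

# Assumed sample of indicators; real scanners maintain much larger, current lists.
MINER_INDICATORS = [
    "coinhive.min.js",
    "coinhive.com",
    "crypto-loot.com",
    "new CoinHive.Anonymous",
]

def find_miners(url):
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace").lower()
    return [hit for hit in MINER_INDICATORS if hit.lower() in html]

if __name__ == "__main__":
    print(find_miners("https://example.com/") or "No known miner signatures found.")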

I am particularly fond of this payload because it’s a great example of what we can expect from attackers when incentives align. While I don’t really expect to see much more activity with website ransomware, I do expect to see more cryptomining when the incentive increases (e.g., the value of cryptocurrency rises again).


I encourage you to read through Sucuri’s Hacked Website Report for 2018. It’s perfect for website owners who want to understand the threats they face as they get their ideas online.

If you’re an online consumer and wondering how you can protect yourself from falling victim to hacked websites, then I encourage you to spend some time learning more about how DNS resolvers, like CleanBrowsing, can help keep infected websites from reaching your devices.

Watch Sucuri’s webinar, with yours truly, below:

Sucuri 2018 Hacked Website Webinar

The Evolving World of DNS Security

Published in Security on March 2, 2019

I was recently at an event listening to representatives of ICANN and CloudFlare speak on DNS security, and it occurred to me that very few of us really understand or appreciate its nuances. It also so happens that the past five years have brought forward a lot of curious and interesting developments in one of the last untouched founding components of the internet.

DNS Primer

The Domain Name System (DNS) is comprised of a number of different Domain Name Servers (DNS). I wrote an article that offers an illustration and better understanding of how the entire DNS ecosystem works together. There is an even cooler illustration explaining how DNS works.

Read More


Installing OSSEC on Linux Distributions

Published in Security on January 3, 2019

The last few posts have been about deploying and configuring OSSEC as an important tool in your security suite. In this article I will provide you a script I wrote to help you quickly deploy OSSEC.

This script assumes you are deploying on a Linux distribution (e.g., Fedora, Ubuntu, CentOS, or Debian). It will force you to choose a distribution before it runs; this ensures it installs the appropriate dependencies based on the distribution type.

Read More


OSSEC FOR WEBSITE SECURITY: PART III – Optimizing for WordPress

Published in Security on December 13, 2018

The previous OSSEC articles went through the process of installing OSSEC and deploying a distributed architecture. This article will focus on configuring OSSEC to make better sense of WordPress activity.

WordPress is a powerful open-source Content Management System (CMS). Its biggest security weakness has always been its biggest blessing – its extensibility (e.g., plugins, themes, etc.). The years at Sucuri taught me that post-compromise there is nothing more important than having good logs. They are the key to understanding what happened. They are also the key to identifying a bad actor’s intent before their actions materialize into something nefarious.

Fun fact: the premise of the Sucuri Security plugin was almost exclusively this visibility. Over the years we added more features to accommodate a more robust application security toolset, but that was always a secondary objective. In fact, the plugin was built on the lessons Daniel learned with OSSEC.

Read More


OSSEC For Website Security: PART II – Distributed Architectures Using Agents and Managers

Published in Security on November 30, 2018

This article assumes you already have OSSEC deployed. If you need a refresher, refer to Part I of OSSEC for website security, written in March 2013.

OSSEC is a popular open-source Host Intrusion Detection System (HIDS). It was founded by Daniel Cid and is currently maintained by a very large community of security professionals. Please note that I don’t run my installations off the official repo; instead I run directly off Daniel’s repo (instructions in the last post).

In the following series I’m going to share the foundational elements of my OSSEC deployments. This article places emphasis on the importance of deploying a distributed architecture, making use of the Agent / Manager options. In future articles you can expect insights into the best way to configure OSSEC for CMS applications like WordPress, and how to tune the engine to make use of alerts and notifications.

If you have questions, don’t hesitate to ask.

Read More


How to enable 2FA on Twitter with Authy, Google Authenticator or another Mobile Application

Published in Security on November 29, 2018

It’s been a long time since I last had to enable 2FA on Twitter, and I found the process completely infuriating. Twitter’s 2FA configuration uses SMS as the default option, which is no longer advised by NIST.

We don’t have to look far to understand why; in the TTPs leveraged to hijack a customer’s domain portfolio, the weakest link was the attacker’s ability to hijack a user’s SIM card (i.e., leading to SMS hijacking).

It is recommended you leverage a Time-based One-Time Password (TOTP) application (e.g., Authy, Google Authenticator) for your 2FA needs. Unfortunately, doing this in the Twitter application requires multiple steps. This guide will walk you through the process.

Read More


Tips to Protect Your Domain[s] Investments

Published in Security on November 20, 2018

A few months back I was working with a customer that was having the worst day of their life. Attackers had taken full control of their most critical digital assets – their domains and the domains of their customers.

The organization affected was an agency. They built and managed sites for their customers, and in a relatively short period they lost access to their sites and their email. In this article I’ll share what happened and offer tips that would have made it a lot harder for the attackers to hijack their domains.

Read More


A Primer on DNS and Security

Published in Security on November 4, 2018

If you’re reading this article you’ve interacted with DNS. In fact, you’d be hard pressed to spend any time online and not interact with DNS.

Many of us spend very little time thinking about it. By design, it’s a “set-it and forget-it” tool that is often set up on our behalf (e.g., our home network, local ISP, office network). Ironically, it’s a critical piece of our security landscape.

This post will explain what DNS is and highlight some of its key security considerations.

Read More


How HTTPS Works – Let’s Establish a Secure Connection

Published in Security on October 28, 2018

The need to use HTTPS on your website has been spearheaded by Google for years (since 2014), and in 2018 we saw massive improvements as more of the web became encrypted by default. Google now reports that 94% of its traffic on the web is encrypted.

What exactly does HTTPS mean, though? And how does it relate to SSL and TLS? These are the most common questions I get when working with customers, and in this article I hope to break it down for the everyday website owner.

Read More

