UNIXy Next – Refocus

I wish a very happy new year to everyone reading this blog entry. And especially to our clients to whom I give my most heartfelt thanks for continuing to place their trust in us.

The start of 2017 is an excellent time to reflect upon the past few years and set a clear direction forward. But first, we must reflect deeply on who we are as an organization, what got us here, and why we’re doing what we’re doing. The work we do at UNIXy is central to individuals, organizations, startups, and businesses that need to manage an increasingly complex online presence. Simply put, we help people.

This reductionist framing matters because it expresses itself in the actions and habits that permeate our culture. Our culture thrives on helping our clients meet their online needs, a commitment embodied in our slogan: Truly Fully Managed Servers. But as we zoom in on our culture and out to what we actually do, the question arises of exactly what it is that we do or don’t do for our clients. The minute we begin to outline the dos and don’ts, we’ve already failed them. And this is where the reductionist stance of helping people collapses.

[Illustration: the four tenets of UNIXy’s managed services]

In light of a fluid and changing online world where multiple services are needed to bring an online presence to life, I’ve sat down to refocus our mission in a crystal-clear and all-encompassing way. The tenets of our managed services are enumerated in the above illustration so as to leave not an iota of doubt as to what you should expect from us.

The four tenets are: support, security, performance, and outreach. Outreach fills the gap, gluing everything together so our clients succeed online. Outreach is the part where UNIXy goes above and beyond to meet the clients’ needs, whether the totality of their services resides with us or partly elsewhere in the cloud.

I’m looking forward to the coming months as we execute on this mission.

Yours sincerely,

Joe Hmamouche

Load Balanced Bot-split Approach to Counter Excessive Bot Traffic. When Search Engines Work Against You!

Websites with a large number of links tend to get a good share of hammering from crawlers like Googlebot. This crawler traffic often leads to major slowdowns and can even knock your server offline, and there isn’t much you can do about it besides upgrading your hardware. You could easily block all search engine traffic and be done with it, but that would be a death knell for your online business: you depend on indexing and ranking to get noticed online. It’s a lose-lose situation. There is, however, a workaround to this dilemma, and we’re happy to share it with you here.


Instead of expanding your hardware fleet to several times your current capacity for the sole purpose of absorbing search engine traffic, you can leverage a dirt-cheap, high-density RAM server to cache as much of your content as possible on a separate box where all bot traffic is “trapped.” See, we already know how to identify most search engine traffic. This triage is key to routing bots and visitors, logically and physically, exactly where we want them to go.


More concretely, we run Varnish on the load balancer node to programmatically split traffic based on the visitor’s user-agent string (Nginx is a possible alternative, but some load-balancing features are only available in its commercial version, Nginx Plus). Bots have well-known, documented user-agent strings, and it takes only a few VCL lines to switch traffic to the respective backends (read: servers).

# Route crawlers to the "bot" backend; everyone else hits "default".
if (req.http.user-agent ~ "(?i)bing|googlebot") {
    set req.backend = bot;      # Varnish 3 syntax; Varnish 4+ uses req.backend_hint
} else {
    set req.backend = default;
}
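The bot and default backends referenced in the snippet must be declared elsewhere in the VCL. A minimal sketch, with placeholder addresses for the two boxes (Varnish 3 syntax):

```vcl
# The "realtime" box serving regular visitors (address is a placeholder)
backend default {
    .host = "10.0.0.10";
    .port = "80";
}

# The high-RAM cache box absorbing crawler traffic (address is a placeholder)
backend bot {
    .host = "10.0.0.20";
    .port = "80";
}
```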

Now that we have bot traffic routed to where we want it to go, let’s go over what the “cache box” should be and do.

First of all, it’s a high-RAM node (commensurate with the size of your content) with a one- or two-core CPU running Varnish Cache. In terms of software configuration, it needs to be near-identical to the “realtime” box because it has to be able to run the very same website (same software requirements). The “realtime” box regularly pushes the latest copy of its database and synchronizes files. “Regularly” can be any reasonable interval you wish, but one hour is reasonable for most implementations. A couple of shell scripts should do it, one for the DB dump/pump and one for the file sync. Be sure to avoid table locks if you’re not running InnoDB.
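As a sketch of those two shell scripts, the commands can be assembled as below. The hostname, database name, and document root are placeholder assumptions; adjust them to your environment.

```shell
#!/bin/sh
# Hourly sync from the "realtime" box to the bot cache box (sketch).
BOT_HOST="bot01.example.com"   # placeholder: the high-RAM cache box
DB="site_db"                   # placeholder: the site's database name
DOCROOT="/var/www/html"        # placeholder: the site's document root

# Print the two sync commands; pipe the output to sh to run them.
sync_cmds() {
    # --single-transaction gives a consistent InnoDB dump without table locks
    echo "mysqldump --single-transaction $DB | ssh $BOT_HOST mysql $DB"
    # rsync only transfers changed files; --delete keeps the copy exact
    echo "rsync -az --delete $DOCROOT/ $BOT_HOST:$DOCROOT/"
}
```

An hourly cron entry that runs the script and pipes its output to sh would keep the bot box current without locking tables on the live site.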

It’s important to set a reasonably high TTL if you’re going to deploy a low-powered CPU on the bot box; otherwise your cache hit ratio won’t be efficient, which would defeat the purpose of the whole setup. A two-day TTL for both static assets and pages wouldn’t be unusual. A simple cache warmer (e.g. one that runs on sitemap changes) can make the whole setup super efficient.
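A minimal warmer can itself be a few lines of shell: pull the sitemap, extract the loc URLs, and request each one with a bot user agent so Varnish routes the requests to the bot backend and caches the responses. The sitemap URL in the usage line is an assumption.

```shell
#!/bin/sh
# Cache warmer sketch: fetch every sitemap URL through the bot backend.
# Run it whenever the sitemap changes (e.g. from cron or a deploy hook).

# Pull <loc> entries out of a sitemap read from stdin, one URL per line
extract_urls() {
    grep -o '<loc>[^<]*</loc>' | sed -e 's/<loc>//' -e 's#</loc>##'
}

# Fetch each URL with a bot user agent so Varnish routes us to the bot box
warm() {
    curl -s "$1" | extract_urls | while read -r url; do
        curl -s -o /dev/null -A "Googlebot" "$url"
    done
}

# Usage (hypothetical sitemap URL):
# warm "http://www.example.com/sitemap.xml"
```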

That’s all!

Has your own firewall blocked you? We have an app for that!

Server owners tend to be power users: they access multiple services (Web, control panel, email, SSH, etc.) on their server within a very short time span, which can easily trigger an IP block at the firewall. If you happen to be on the go and haven’t had a chance to whitelist your mobile IP, you’re out of luck (well, not really, because we’re still here to help 24/7)! But power users are POWER users. So we empower them to unblock themselves without needing to enter a single request with us.

 

Firewall Unblock Tool

All you need to do from now on is access http://unblock.unixy.net/ and be unblocked from your server! And… done!

 

Enjoy

Largest DDoS Attack in the History of the Internet at 300Gbps

 

Spamhaus has been the target of a DDoS attack reported to be the largest ever recorded in the history of the Internet at 300Gbps. Very few data centers have an aggregate network/carrier capacity of 300Gbps, so an attack of that scale is sure to put a nail in the coffin of just about any provider. In fact, it was such a massive attack that it caused congestion for Internet traffic at large. We’ve published an article on a novel and affordable way to mitigate a small-scale DDoS using readily available open-source tools like Nginx and Varnish on VPS nodes or dedicated servers.

Be sure to read the article since it explains the where, why, what, and who of DDoS!