Blog Comments Working Again (WordPress, CloudFront & Comments)

Because of last year's migration of this blog to an Amazon AWS server with a CloudFront distribution, the comment function apparently no longer worked the way it should.

I noticed the broken comment function a few days ago on my rather popular post about Hong Kong & the demonstrations.

As a result, I had to adjust the configuration of my CloudFront distribution so that the comment function works properly again and gets through the cache filter that sits in front of the site. If you have a similar problem, you can simply create a new "Cache Behavior" for the file wp-comments-post.php and configure it with the following settings:
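The screenshot with the exact settings isn't reproduced here; as a hedged sketch, a cache behavior along the following lines (shown in CloudFront's distribution-config JSON; the values are what typically works for WordPress comment posting, not a verbatim copy of this blog's configuration) disables caching for that path and forwards the POST data, cookies, and query strings WordPress needs:

```json
{
  "PathPattern": "wp-comments-post.php",
  "AllowedMethods": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
  "ForwardedValues": {
    "QueryString": true,
    "Cookies": { "Forward": "all" },
    "Headers": ["*"]
  },
  "MinTTL": 0,
  "DefaultTTL": 0,
  "MaxTTL": 0
}
```

The essential points are allowing POST, setting all TTLs to zero, and forwarding cookies and query strings, so comment submissions reach the origin instead of being answered from an edge cache.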


Migration: WordPress + MariaDB (MySQL) + PHP + EC2 + CloudFront

I previously wrote about how and why I was migrating away from Hosteurope to Amazon AWS. The basic steps sounded very simple:

  1. Backup databases and data directories on the old server
  2. Set up the new server including all software & services needed
  3. Restore all backups to the new server
  4. Change DNS setting to apply changes to all visitors

Sounds easy, right?
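Condensed into commands, step 1 (plus shipping the backups to the new machine) might look like the sketch below. The database flags, paths, and hostname are placeholder assumptions, not my actual setup, and the commands are printed as a dry run rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of step 1: dump the databases, archive the web root, and
# copy both to the new server. All names, paths, and hosts are placeholders.
STAMP=$(date +%F)
DUMP_CMD="mysqldump --all-databases --single-transaction | gzip > all-dbs-${STAMP}.sql.gz"
TAR_CMD="tar czf wp-files-${STAMP}.tar.gz /var/www/html"
COPY_CMD="scp all-dbs-${STAMP}.sql.gz wp-files-${STAMP}.tar.gz ec2-user@NEW_HOST:~/"

# Printed instead of executed; drop the echo wrappers to run for real.
echo "$DUMP_CMD"
echo "$TAR_CMD"
echo "$COPY_CMD"
```

On the new server, step 3 is essentially the mirror image: unpack the tarball into the web root and pipe the decompressed dump back into MySQL/MariaDB.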

Everyone who's done this kind of thing before knows it's never quite that easy. The first step is quite straightforward. Not a big deal. For the second step, I ended up using a combination of how-tos to get everything set up properly:

  1. Official AWS EC2: Linux+Apache+MySQL/MariaDB+PHP
    Tutorial: Install a LAMP Web Server on Amazon Linux 2
  2. Official AWS EC2: WordPress
    Tutorial: Hosting a WordPress Blog with Amazon Linux
  3. Working CloudFront Config for WordPress:
    Setting up WordPress behind Amazon CloudFront
  4. General WordPress on t2.nano Instance:
    How I made a tiny t2.nano EC2 instance handle thousands of monthly visitors using CloudFront

My main goal was to run the smallest instance AWS has (currently t3.nano), which costs around $3.75 per month. It turned out that MariaDB (and likewise MySQL) does not start up properly on a nano instance with just 512 MB of RAM. Therefore, I had to go with at least the t3.micro instance.

To ensure that load spikes are handled properly and the server doesn't go offline during peak times, a content delivery network (CDN) comes in handy. The great thing about AWS is that they've thought of all these scenarios and, of course, have a CDN solution ready to deploy: CloudFront. The tricky part is having CloudFront kick in at the right time, because it caches content to deliver it from its own edge locations across the globe, while WordPress generates pages dynamically. So CloudFront needs to be able to work in that environment. Setting up CloudFront properly was the part that cost the most time, but it works great now.

CloudFront Network Map

I have now deployed CloudFront for multiple sites, all running on the same t3.micro instance. One by one, I activated them for CloudFront distribution, and over the past couple of days the traffic handled by CloudFront has been going up continuously.

CloudFront Cache Statistics

To run massive load tests, I used Apache JMeter for the first time. It's a monster when it comes to load testing, and it took me about an hour to get it running the first time. You can literally configure everything in it.
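For anyone trying it, JMeter also has a non-GUI mode that is handy for big runs. A typical invocation looks something like this (plan.jmx, results.jtl, and report/ are placeholder names, not from my actual test); it is printed as a dry run here since it assumes JMeter is installed:

```shell
#!/bin/sh
# Typical non-GUI JMeter run: -n = no GUI, -t = test plan, -l = results log,
# -e -o = render an HTML report into the given folder. Names are placeholders.
JMETER_CMD="jmeter -n -t plan.jmx -l results.jtl -e -o report/"
echo "$JMETER_CMD"
```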

As it goes when you're setting up new things, you end up working with many "new things". In my case, it was the first time I used MariaDB, which is a fork of MySQL. It was also the first time I worked with php-fpm, which "is an alternative PHP FastCGI implementation with some additional features useful for sites of any size, especially busier sites".

So far I'm quite happy with the current setup. Let's see how it performs over time. My sites are constantly under attack from bots that are trying to figure out passwords and otherwise gain access to them. Still, every site and server can and eventually will go down under a large enough denial-of-service attack. At the current attack level we're doing OK, but let's see how long that lasts and what adjustments I'll have to deploy.


Amazon Basics – A First Test

As a Mac user, I naturally have a Magic Mouse and a wireless keyboard. Both are powered by standard AA batteries, although the newest version of the Magic Mouse now has a permanently built-in rechargeable battery. I still have the old version, though, and have to keep using batteries for now.

I have now finally brought myself to get rechargeable batteries, so that I can at least make a small contribution for the sake of the environment. It hurt every single time when, after about a month, I actually had to throw the disposable batteries away.

After quite a few online price comparisons, I decided for the very first time on a product from Amazon Basics. That is an interesting Amazon private label which actually makes a lot of sense. They describe themselves like this:

"AmazonBasics offers high-quality everyday products at low prices, delivered directly to your home."

So, as a test, I ordered the following:

  1. A battery charger for €13.19
  2. An 8-pack of rechargeable batteries with the largest storage capacity, 2,400 mAh, for €18.39

So far I am extremely satisfied with both. They were delivered to me in early October in two separate packages. Two of the rechargeables have been in my mouse ever since and still have 88% remaining today. As you can see in the screenshot above, the old batteries in my keyboard are almost drained. They, too, will then be replaced with rechargeable Amazon Basics batteries.

The higher-capacity rechargeables of over 2,000 mAh in particular are quite a bit more expensive from other well-known manufacturers (Varta, GP, etc.). There may be a cheaper offer here and there, but in this niche of consumables with constant demand, Amazon apparently wants to score with its low-cost Amazon Basics products.

I am curious to see how they hold up over the course of a year. In my opinion, the batteries should last at least that long to pay for themselves.

Here are a few more impressions of my Amazon Basics batteries:



Migration – Bye Bye Hosteurope – Hello Amazon AWS

I recently moved this blog from one old server to a new one. This is my first blog post on this new server. Let’s hope nothing explodes.

For many years I have been a more or less happy customer of Hosteurope, a hosting company headquartered in Germany (with hosting sites in other European countries). Many years ago I chose to sign up with Hosteurope for one of their so-called "VPS", a.k.a. Virtual Private Server, products. That's basically a virtual server running on one of their larger server clusters.

About a year ago I realized that my annual bill for that Hosteurope server was about €156. That's not a huge amount, but as I've been involved in cloud hosting for many of our projects, I knew there are many other options that offer a much better price/value ratio.

Being lazy, as humans are, I didn't want to migrate my server away. In fact, it would have been enough for me if Hosteurope had lowered my service charge to the price of the currently available equivalent VPS they are selling. Over the years, hardware has gotten cheaper, and so has hosting. In short: newer Hosteurope VPS products now cost less and provide more power.

I simply love attractive cost/value ratios.

So last year I reached out to Hosteurope and asked whether they could offer me my old VPS at the new price (roughly €4 less per month). It's not that much of a difference, but it's about the principle: I just like to be treated fairly. They did not agree and simply said: "You have to stick to your old product, or you may also terminate your contract." Last year I missed the termination deadline (again, this wasn't really a priority for me), and therefore the contract renewed for another year. This year, though, I remembered and terminated on time.

AWS has almost unlimited capabilities

Our businesses have now been customers of Amazon Web Services for over 10 years. We've been with them almost since their first service, so I've been working with AWS for a long, long time. We even had an incident where we literally spent $4,000 USD in minutes, accidentally. That can happen if you "overdo" extreme automation 🙂

However, when used properly, AWS can be very useful and cost-efficient, especially when we're talking about hosting my small blog as well as the few other sites I run on this server.

AWS has some great services I was able to use for this move, including S3 (Simple Storage Service), EC2 (Elastic Compute Cloud), the CloudFront content delivery network, the CloudWatch monitoring service, and many others as well.

AWS's own Amazon Linux 2 is also a great Linux distribution that I've grown to like, and I'm quite confident it will keep getting maintained for the next 10 years. The old Debian running on my Hosteurope server wasn't being maintained by Hosteurope, especially as they had some funky customizations and source settings in there that don't seem to have gotten the love they deserved in recent years. Hence, I ended up with an outdated system.

I'm confident this will be better now. And besides, instead of paying €156, I'm expecting to pay at most €60 per year. That's a fluffy 60% in savings, and I haven't even factored in Reserved Instance discounts.

Let’s see how this goes. For now, I’d be happy if nothing crashes in the next few days after this migration 😂


P.S.: Monitoring capabilities are also quite neat. I can monitor the performance at all times. Have a look:


Scaling a Web Server to Serve 165 Million Ad Requests per Day

I am usually not the kind of guy who likes to boast about numbers, but in this case I believe it helps to put things into perspective. For over seven years I have been in the mobile app business, targeting consumers directly. Over the years we have created thousands of products, most of them paid, some completely free, and some with ads.

For years, I did not believe in monetizing apps through advertising. Even today, I am still very skeptical about it, because you need A LOT of ad requests per day before any significant income can be generated. At the moment, our few (around 10) mobile applications that sport ad banners generate up to 165 million (165,000,000) ad requests per day. That adds up to 5.115 billion ad requests per month. If you compare that to a fairly large mobile advertising company like Adfonic with their "35 billion ad requests per month" (src: About Adfonic), we are doing quite alright for a small app company.

We use a mix of different ad networks, depending on what performs best on which platform. Beyond that, we break traffic down by country/region to use the ad provider that works best for each region. Part of that also involves ad providers that do not natively support certain platforms. So we built server-side components to handle such ad requests in a "proxy" kind of way, which still lets us show ads in apps on platforms the ad provider does not officially support. Basically, we have a mini website that just shows an ad, and hundreds of thousands of mobile devices from all over the world can access this website at the same time.
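A minimal sketch of that proxy idea, with hypothetical endpoint and parameter names (none of these match our real ad networks): the server-side component maps the app's request onto the upstream ad network's query string and fetches the ad on the client's behalf.

```python
from urllib.parse import urlencode

# Hypothetical upstream ad-network endpoint; a placeholder, not a real service.
UPSTREAM = "https://ads.example.com/serve"

def build_upstream_url(device_id: str, country: str, slot: str) -> str:
    """Translate the app's ad request into the upstream network's URL scheme."""
    params = {"d": device_id, "c": country, "s": slot}
    return f"{UPSTREAM}?{urlencode(params)}"
```

The real component would then fetch that URL and return the ad markup to the mobile app, so the app never has to speak the ad network's protocol directly.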

Using cloud service providers like Amazon AWS, Rackspace, SoftLayer, Microsoft Azure, or others, anyone can serve a virtually unlimited number of requests these days. It all depends on your credit card limit. There are obviously usage patterns in applications, and over the course of the day we see ups and downs. For example, at midnight GMT most people all over the world like to use our applications and therefore request more ads. Five hours later, we experience the lowest traffic. Depending on app store release schedules, promotions, featured listings, user notifications, and external promotions on blogs or elsewhere, unexpected sudden spikes in traffic can occur at any time. "Automatic scaling" in combination with "load balancing" seemed to be the magic solution for this.

After months of running a bunch of server instances behind a load balancer, we were quite happy with the performance. It was easy for us to determine usage patterns and see how many servers we needed to serve the average maximum number of requests without our service failing. We didn't think much about cost optimization because our cloud computing bills weren't that high, so we didn't really work on auto-scaling components. That had two main disadvantages: First, we spent more than we needed to, as we probably didn't need half of the servers that were running during low-traffic periods. Second, we were not prepared for sudden massive spikes in traffic. With TreeCrunch, on the other hand, we are building a scalable system from the ground up.

So early this week I took three hours and looked into how to implement such auto-scaling. It actually took me two of those hours just to install the tools properly. As you can see in the charts above, the current system works like this: if we hit a maximum amount of average traffic per server, a new server is created, added to our load balancer, and immediately starts serving ads. If the average traffic falls below a certain threshold, a server is terminated and therefore stops serving ads.

To minimize server load, we run really tiny PHP scripts that are extremely optimized just for requesting an ad from the desired ad company and delivering that ad to the client (the mobile app). As the web server, we are currently using lighttpd, which is very lightweight indeed. Interestingly, we noticed that "normal" system resources are not really the problem when handling a lot of requests. Our CPU usage is fairly acceptable (constantly around 50%), we don't need any hard drive space as we are just proxying requests, and we don't even run out of memory. The first limit one of our ad servers hits is the maximum number of open sockets, which by default is around 32,000 (or something like that) on Debian-based Linux. That's more or less an artificial limit set by the operating system, but we haven't played with adjusting it yet.
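The scale-up/scale-down rule described above can be sketched as a small decision function; the thresholds, minimum fleet size, and function name are illustrative, not our production values:

```python
def desired_server_count(current: int, avg_requests_per_server: float,
                         scale_up_at: float, scale_down_at: float,
                         min_servers: int = 2) -> int:
    """Return how many servers the fleet should have after this check.

    Adds one server when average per-server traffic exceeds the upper
    threshold; removes one when it drops below the lower threshold,
    never going under min_servers.
    """
    if avg_requests_per_server > scale_up_at:
        return current + 1
    if avg_requests_per_server < scale_down_at and current > min_servers:
        return current - 1
    return current
```

A monitoring loop would call this periodically and launch or terminate instances (registering or deregistering them with the load balancer) to match the returned count.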

Summary: In a fairly short time, we managed to set up a proper auto-scaling policy that allows us to scale to a virtually unlimited number of ad requests. With fairly low budgets, companies can nowadays set up proper data centers, serve millions of users, and maintain their infrastructure with a few very talented people, without purchasing any hardware that would be obsolete a year or two later.

I love the time I am living in. Every single day something new and exciting pops up.
