Monday, May 30, 2016

Mac OS X virtual machine guest lags

If you have a Mac VM running as a guest on a non-Mac host (Windows in my case), you might experience serious lag and graphical glitches like I did. Fixing it was pretty easy: disable "BeamSync", the Mac equivalent of VSync if I understood that correctly. Anyway, you don't need it for normal Mac usage (i.e. no videos, games, etc., I guess), so download "BeamSyncDropper" and keep on using your Hackintosh efficiently :)

You can download the tool + read instructions on how to use it here:

PS: Mac VMs are great for developers who only develop for iOS if they really have to! ;)

Saturday, May 28, 2016

Thoughts on implementing your own search engine

Imagine you have a big database of products which you want to make accessible to your customers via a search engine - what would you do? Of course you can bootstrap a first working version using a third-party solution, as we did with Swiftype. However, as with most other third-party solutions, you'll eventually hit a point where the third party doesn't satisfy your needs anymore. In our case our "need" was simply improved search results tailored to our use case, but because there aren't many settings you can tweak in Swiftype and we couldn't find a viable alternative, we decided to roll our own search engine. I had a chat with someone who feels very comfortable with databases and search engines in general and he gave me some tips to make our new search shine. Here's what he told me / what we eventually came up with:


The first and probably most important step is the tokenization of your data: translating a string like "first-party search engines rock" into a set of strings "first", "party", "search", "engines", "rock". So basically you split the string into all its words. Sounds easy? Almost, if it weren't for compound words and other funny language-specific characteristics. Compound words are not that common in English - so uncommon, in fact, that I can't think of one right now - but in German they are VERY common. If you can't handle compound words, your search probably sucks for German data. Our solution to this problem is to first create a set of "known words". In our case, we used the data we want to tokenize in order to tokenize it. Inception. So if we have a product called "milk" and another called "milkshake", we split the latter into "milk" and "shake" because we know that "milk" is a real word. Obviously this can also lead to false positives where you don't want to associate a product with "milk" although its name contains those characters, but that's a whole new set of problems we won't address today. Another set of "known words" could come from previous queries of your users. However, you should handle those separately, i.e. keep a set of "clean" and a set of "dirty" keywords.
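A sketch of that known-words tokenizer in JavaScript (the words in the set are made up for illustration; the real set would be built from your own product data):

```javascript
// Set of "known words", built from the data itself (made-up entries here).
const knownWords = new Set(["milk", "shake", "search", "engine"]);

// Try to split a compound word into known parts, e.g. "milkshake"
// into ["milk", "shake"]. Returns null if no full split exists.
function splitCompound(word) {
  if (knownWords.has(word)) return [word];
  for (let i = 1; i < word.length; i++) {
    const head = word.slice(0, i);
    if (!knownWords.has(head)) continue;
    const rest = splitCompound(word.slice(i));
    if (rest) return [head].concat(rest);
  }
  return null;
}

// Tokenize a string: lowercase, split on non-letters, then split compounds.
function tokenize(text) {
  const tokens = [];
  for (const word of text.toLowerCase().split(/[^a-zäöüß]+/).filter(Boolean)) {
    tokens.push(...(splitCompound(word) || [word]));
  }
  return tokens;
}
```

So tokenize("Milkshake search") yields ["milk", "shake", "search"] - including the false-positive risk mentioned above, since any product containing a known word as a substring will pick it up as a keyword.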


The next step would be to "normalize" words so that "run", "running" and "ran" are all associated with "run". This is called stemming. We skipped this step because it doesn't make much sense for our kind of data (mostly names, so no verbs) and is quite hard to implement - usually involving some kind of dictionary.
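For completeness, a very crude suffix-stripping stemmer might look like this - nowhere near a real stemmer such as Porter's, and it happily mangles words like "morning", which is exactly why this step is hard and why we skipped it:

```javascript
// Strip common English suffixes; "ning" comes before "ing" so that
// "running" loses its doubled consonant and becomes "run".
function stem(word) {
  for (const suffix of ["ning", "ing", "ed", "s"]) {
    if (word.length > suffix.length + 2 && word.endsWith(suffix)) {
      return word.slice(0, -suffix.length);
    }
  }
  return word; // irregular forms like "ran" would need a dictionary
}
```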


What you want to do next is score your tokens, so that later you only need to query some kind of database to get the results. For each token, count how often it appears for one product and assign that as the score of this particular keyword for this product. For example, for a product called "yummy milk" in a category of products called "milk products", you assign a score of 2 to the keyword "milk". Bonus: weight your score by assigning a higher score if a keyword appears in certain fields (e.g. increase the score by 2 if the keyword appears in the name of the product and only by 1 if it appears in the category).
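Sketched in code, with the field weights from the bonus (the product object shape is made up):

```javascript
// Field weights: a hit in the name counts 2, in the category 1.
const fieldWeights = { name: 2, category: 1 };

// Count weighted keyword occurrences across a product's fields.
function scoreKeywords(product) {
  const scores = {};
  for (const [field, weight] of Object.entries(fieldWeights)) {
    const words = (product[field] || "").toLowerCase().split(/\s+/).filter(Boolean);
    for (const word of words) {
      scores[word] = (scores[word] || 0) + weight;
    }
  }
  return scores;
}
```

For { name: "yummy milk", category: "milk products" } this yields { yummy: 2, milk: 3, products: 1 } - "milk" gets 2 from the name plus 1 from the category.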


Now we have lots of keywords per product, but what if the user searches for something we don't have a keyword for, or mistypes the query? First, we create a phonetic encoding for each keyword and store that. It represents how a word is pronounced, so if the user searches for a word that sounds similar to one of our keywords, we'll still be able to return a result. For German data you can use "Cologne phonetics".
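Cologne phonetics itself is a fairly long table of rules, so here's just the general idea as a simplified Soundex-style encoder (this is NOT Cologne phonetics, only an illustration): similar-sounding consonants map to the same digit, vowels are dropped, and words that sound alike end up with the same code.

```javascript
// Map similar-sounding consonants to the same digit (Soundex-style).
const phoneticCodes = {
  b: "1", f: "1", p: "1", v: "1",
  c: "2", g: "2", j: "2", k: "2", q: "2", s: "2", x: "2", z: "2",
  d: "3", t: "3", l: "4", m: "5", n: "5", r: "6",
};

function phoneticKey(word) {
  const w = word.toLowerCase();
  let key = w[0].toUpperCase();      // keep the first letter as-is
  let prev = phoneticCodes[w[0]] || "";
  for (const ch of w.slice(1)) {
    const code = phoneticCodes[ch] || ""; // vowels etc. have no code
    if (code && code !== prev) key += code; // collapse adjacent repeats
    prev = code;
  }
  return (key + "000").slice(0, 4);  // pad/cut to a fixed length
}
```

With this, "milk" and the German "Milch" both encode to "M420", so a query for one can find the other.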
So what about typos like "mlik" instead of "milk"? We store each keyword with its characters sorted. Boom. Both "mlik" and "milk" are translated to "iklm" first, so they both return the same results. Again, this can lead to false positives in some cases.
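The sorted-characters trick is a one-liner:

```javascript
// Sort a word's characters so that transposition typos collapse
// onto the same key: both "milk" and "mlik" become "iklm".
function typoKey(word) {
  return word.toLowerCase().split("").sort().join("");
}
```

Index each keyword under typoKey(keyword) and look up typoKey(query); the false positives mentioned above come from anagrams like "dog" and "god" sharing a key.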

Further improvements

Other things you can do to improve your search engine (not yet implemented by us):
- scrape a word's synonyms from Wiktionary and apply them during tokenization
- generate possible typos for a word by looking at the keys surrounding a character, e.g. around "i" in "milk" there are "u", "o", "j" and "k" on the keyboard, so we create alternative keywords: "mulk", "molk", ... you get the idea.
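The keyboard-neighbor idea, sketched with a deliberately tiny neighbor map (a real one would cover the whole QWERTY/QWERTZ layout):

```javascript
// Which keys sit next to which on the keyboard (only "i" mapped here).
const keyNeighbors = { i: ["u", "o", "j", "k"] };

// Generate one-character typo variants of a word by swapping each
// character for its keyboard neighbors.
function typoVariants(word) {
  const variants = [];
  for (let pos = 0; pos < word.length; pos++) {
    for (const neighbor of keyNeighbors[word[pos]] || []) {
      variants.push(word.slice(0, pos) + neighbor + word.slice(pos + 1));
    }
  }
  return variants;
}
```

Store these variants as extra (lower-scored) keywords at index time, so the typo never has to be handled at query time.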

One final improvement that would take your search engine further: natural language processing. Someone who is searching for "milk" is probably only interested in actual milk, nothing else. No idea how to implement that - only Google knows, I guess...

Just for fun, here's what our search data looks like in Google AppEngine Search:
Columns: sorted characters of a keyword, used as field names. Rows: score for each keyword per product.

Thursday, December 31, 2015

How to get the most out of CloudFlare

What are the benefits of using CloudFlare?

Quite a few things, all of them free to use:

Using CloudFlare: performance comparison before and after

Unfortunately, blogs hosted by Blogger are designed in a way that makes it impossible for CloudFlare to fully optimize them: static resources (JavaScript, images, etc.) are hosted on dozens of different domains, but CloudFlare can only optimize content hosted on your own domain. There are still a few things CloudFlare can do, most importantly delaying JavaScript until the page is loaded, which speeds up the time it takes to see content on your website.

Here's the raw data:
before enabling CloudFlare - WebPagetest
after enabling CloudFlare - WebPagetest

before enabling CloudFlare - PageSpeed Insights
after enabling CloudFlare - PageSpeed Insights

How to configure CloudFlare?

  1. Sign up at
  2. Follow CloudFlare setup
    1. Add your domain
    2. Make sure they imported all DNS entries for your domain (about half of them were missing in my case). Also make sure the "Status" of each entry is an orange cloud icon - that means all traffic is going through CloudFlare's servers. Only then will you benefit from the features offered by CloudFlare; otherwise it's just a plain DNS server.
    3. Change nameserver at your domain registrar
  3. Configure CloudFlare
    1. The default settings are mostly fine. I turned "Security Level" down to "Low" because I want to avoid false positives where some of my visitors have to enter a captcha before reading my blog...
    2. Turn on "Auto Minify" for HTML, JS and CSS. There's almost no risk of breaking something unless you're doing funky stuff in your JavaScript (which you shouldn't be doing anyway).
    3. Wait for the DNS changes to kick in (1-2 days), see if everything still works fine and then give "Rocket Loader" a try. Set it to "Automatic", force-reload your website and see if everything works as expected.
    4. Create a "Page Rule" for "*" (e.g. "*") and set "Custom Caching" to "Cache everything"

Sunday, December 27, 2015

How much does improved website speed cost?

Okay, I'll admit that the title of this post sounds a little too sensational. But it fits into a series of posts I've written before: How much does the worldwide fastest DNS server cost? and How much does a HTTPS certificate cost?. Of course, improving a website's speed is a long (and painful!?) process which takes a lot of time and knowledge, but there are a few shortcuts to get you started faster. As in the other two posts mentioned above, the solution in this case is, again, to put a server between your users and your servers. CloudFlare's servers, to be exact.

CloudFlare does several things here. Most importantly, it caches everything that is safe to cache - static images, for example - so that they are downloaded from its speedy, globally distributed servers instead of your possibly slow (no way!?) servers. Besides caching images, it also optimizes them (cutting off unnecessary fat) in the process. There's more being optimized, like shrinking HTML, CSS and JavaScript files, but don't leave it to me to explain all of that - go read it directly at CloudFlare.

There are other companies offering similar features, of course, but CloudFlare is the one I'm using in production myself, so I unfortunately can't speak for the others.

Here's how to get the most out of CloudFlare!

Multi-module dispatcher for AppEngine Development Server

The great thing about AppEngine is that you can test and use almost all features locally during development. However, there's one really important thing missing for everybody who is using modules as suggested by the official documentation: accessing all your modules via the same port, with requests routed according to your configuration in dispatch.xml. This feature is not available locally: "All dispatch files are ignored when running the development server. The only way to target instances is through their ports."

That makes some tasks really complicated, like sending requests from a web client. Usually you send requests to the same server hosting the resources, but with a multi-module setup that does not work locally, because the module you want to access is hosted on a different port than the one serving your web client's resources. At first we adapted our code accordingly, which included a really complicated Grunt configuration to rewrite server URLs in JavaScript. Eventually we hit a roadblock because cookies set by one module were not visible to requests sent to another module (because they act as different hostnames). The solution we came up with is a proxy server in front of your modules which routes requests like AppEngine does when deployed.

In order to achieve this, we're using some grunt-plugins:

Now call "grunt dispatch" and your server is ready at localhost:9000. All requests with a URL matching "/api" (e.g. "localhost:9000/api/hello") are routed to localhost:8081 and everything else goes to localhost:8080. As you can see, this does not read your actual configuration from dispatch.xml, but it's really easy to set up anyway (same logic, just JSON instead of XML).
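The plugins aren't listed here, but a setup based on grunt-contrib-connect plus grunt-connect-proxy (my assumption - any connect-based proxy works the same way) producing exactly that routing could look like this:

```javascript
// Gruntfile.js - a sketch assuming grunt-contrib-connect and
// grunt-connect-proxy; mirrors the dispatch.xml routing locally.
module.exports = function (grunt) {
  const proxyRequest = require("grunt-connect-proxy/lib/utils").proxyRequest;

  grunt.initConfig({
    connect: {
      dispatch: {
        options: {
          hostname: "localhost",
          port: 9000, // the single port all modules are reachable on
          middleware: function (connect, options, middlewares) {
            middlewares.unshift(proxyRequest); // route via the proxy table
            return middlewares;
          },
        },
        // Same logic as dispatch.xml, just JSON instead of XML:
        proxies: [
          { context: "/api", host: "localhost", port: 8081 }, // API module
          { context: "/", host: "localhost", port: 8080 },    // default module
        ],
      },
    },
  });

  grunt.loadNpmTasks("grunt-contrib-connect");
  grunt.loadNpmTasks("grunt-connect-proxy");
  grunt.registerTask("dispatch", [
    "configureProxies:dispatch",
    "connect:dispatch:keepalive",
  ]);
};
```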

The same applies to Python users with dispatch.yaml of course...