Celery message TTL. Make sure your queue never overflows


Sometimes huge bursts of messages rush into our Sentry error logger. Given that Sentry generates a couple of Celery tasks for every incoming message, this periodically overflows our RabbitMQ queue.

We don't mind losing some messages as long as the logger stays responsive and keeps handling the rest of our tasks.

The solution is quite simple, but it took me some time to arrive at it. I added the "x-message-ttl" argument to each queue to make sure it doesn't get stuck. My settings.py file now looks like this:

    Queue('default', routing_key='default', queue_arguments={'x-message-ttl': 600000}),
    Queue('celery', routing_key='celery', queue_arguments={'x-message-ttl': 600000}),
    Queue('alerts', routing_key='alerts', queue_arguments={'x-message-ttl': 600000}),
    Queue('cleanup', routing_key='cleanup', queue_arguments={'x-message-ttl': 600000}),
    Queue('sourcemaps', routing_key='sourcemaps', queue_arguments={'x-message-ttl': 600000}),
    Queue('search', routing_key='search', queue_arguments={'x-message-ttl': 600000}),
    Queue('counters', routing_key='counters', queue_arguments={'x-message-ttl': 600000}),
    Queue('events', routing_key='events', queue_arguments={'x-message-ttl': 600000}),
    Queue('triggers', routing_key='triggers', queue_arguments={'x-message-ttl': 600000}),

In short, this means that no message will stay in a queue longer than 10 minutes. Note that RabbitMQ expects x-message-ttl in milliseconds, so 10 minutes is 600000. Stale messages are quietly dropped.
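For context, entries like the ones above presumably live inside the CELERY_QUEUES setting; a minimal sketch of the full fragment, assuming Kombu's Queue class (which Celery uses for queue declarations):

```python
# settings.py (sketch): declare queues with a per-queue message TTL.
# x-message-ttl is a RabbitMQ queue argument expressed in milliseconds.
from kombu import Queue

TEN_MINUTES_MS = 10 * 60 * 1000  # 600000 ms

CELERY_QUEUES = (
    Queue('default', routing_key='default',
          queue_arguments={'x-message-ttl': TEN_MINUTES_MS}),
    Queue('events', routing_key='events',
          queue_arguments={'x-message-ttl': TEN_MINUTES_MS}),
    # ... repeat for the remaining queues ...
)
```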

Some notes on top of that:

  1. Adding the arguments alone isn't enough if your queues already exist. Whenever you change queue_arguments, you have to re-create the queues. In practice, all you need to do is delete them; Celery creates new ones on the next startup. I use the RabbitMQ Management Plugin for this. By the way, it works with RabbitMQ 3.x only, and don't forget to remove the default "guest" RabbitMQ user if you install it!
  2. I got the list of Celery queues used by Sentry from the sentry.conf.server file (this is its latest version). Make sure you don't miss any queues.
  3. Try the undocumented setting SENTRY_USE_SEARCH = False to reduce the number of tasks in your queue. Sentry does nasty things when this option is turned on (proof).
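The queue removal from note 1 can also be scripted instead of clicking through the management UI; a sketch assuming the management plugin's rabbitmqadmin CLI is installed and the default vhost is in use:

```shell
# delete the old queue definitions; Celery re-declares them with the
# new x-message-ttl argument on its next startup
for q in default celery alerts cleanup sourcemaps search counters events triggers; do
    rabbitmqadmin delete queue name="$q"
done
```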

Happy logging!


Celery for Internal API in SOA infrastructure


I must admit that I'm very slow. PyCon Russia in Yekaterinburg ended almost a month ago, and only now have I managed to publish the transcript of my talk.

Anyway, the presentation from Slideshare is here.

The talk transcript is below.

Read more…


Introducing "resources". A fixture lifecycle management library for your tests


This is a short introduction to a project we made recently here at Doist Inc to improve our test codebase. The project is named resources; it's not available on PyPI, but you can install it right from GitHub anyway.

pip install -e git+git@github.com:Doist/resources.git#egg=resources-dev

What's the point of the project?

The idea is to provide yet another way to manage your test fixtures (that is, the objects and other resources you usually create before you start verifying an assertion).

There are two popular ways to initialize fixtures in Python that I'm aware of:

  • xUnit-style, with setup/teardown methods. It is supported by the majority of frameworks and looks like the most universal way to initialize a testing environment. Yet it's somewhat verbose and makes you either repeat yourself, develop an extensive set of helper functions or a hierarchy of test classes, or invent something else to stay DRY and keep your tests readable and manageable.
  • py.test-style, where you inject dependencies into a test function by declaring them as parameters. The py.test magic instantiates the objects for you and passes them to the test function as arguments. It's good because it's reusable and granular, but it's not very flexible: there is no easy way to pass parameters to a py.test fixture function.

Now, there is another way to achieve the same. The approach we propose is to create fixtures roughly the way py.test does (a function per fixture) and to manage their lifespan the way Michael Foord's mock manages the patches it applies: with context managers, function decorators, or start/stop methods.

So, without further ado, here is a short usage example that explains it better than a thousand words. The library should work with py.test, nose, or unittest.

# import global instance
from resources import resources

# register resource named "user" by defining a function with the same
# name. The function must have exactly one "yield" construction and is
# used both to set up and tear down the fixtures

def user(email='joe@example.com', password='password', username='Joe'):
    user = User.objects.create(email=email, password=password, username=username)
    yield user

# use resource with an automatically created "user_ctx" context manager
def test_user_properties():
    with resources.user_ctx() as user:
        # the resource will be available as an assignment target
        # of the "with"-construction
        assert user.username == 'Joe'
        # it's also stored in the "resources" global object
        assert resources.user.username == 'Joe'
    # the instance doesn't exist and isn't accessible anymore
    assert not hasattr(resources, 'user')

It's a very basic example, though. If you feel it may be useful to you, feel free to visit the GitHub page and read the README we created especially for this purpose.
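As a side note, the setup/teardown-around-a-yield lifecycle the library builds on can be illustrated with nothing but the standard library's contextlib. Here user_fixture is a hypothetical stand-in for demonstration, not the resources API:

```python
from contextlib import contextmanager

log = []

@contextmanager
def user_fixture(username='Joe'):
    user = {'username': username}  # setup: everything before the yield
    log.append('created')
    try:
        yield user                 # hand the fixture to the test body
    finally:
        log.append('destroyed')    # teardown: runs even if the test fails

with user_fixture() as user:
    assert user['username'] == 'Joe'
```

After the with-block exits, the teardown code has run exactly once, which is the guarantee both mock-style patches and yield-based fixtures rely on.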



Fresh soft for your Amazon AMI. Part 2. Publishing your own work.


This is the second and last part of the series "Fresh soft for your Amazon AMI". In this post I aim to explain how you can rebuild an srpm package to a new version using mock and git, and how you can publish your own yum repository.

Feel free to refer to Part 1. Stealing from Fedora, where I describe how to get a source package from Fedora and rebuild it.

Read more…


Fresh soft for your Amazon AMI. Part 1. Stealing from Fedora


I recently started using the Amazon Linux AMI as my main deployment platform. I had been using Debian and Ubuntu distributions for a long time before that, and I was quickly disappointed by how outdated some of the software provided by the standard Amazon Linux AMI and EPEL repositories is.

Then I decided to find out how easy it is to build your own software for the Amazon Linux AMI. Luckily, it turned out to be easier when you don't start from scratch but use existing packages as leverage. Here and in Part 2. Publishing your own work I share my experience.

These instructions should work for all RHEL-based distributions, and to build rpm packages for CentOS in particular.

Read more…


Contents © 2013 Roman Imankulov - Powered by Nikola