# Celery message TTL: make sure your queue never overflows

Posted:

Sometimes we get huge bursts of messages rushing into our Sentry error logger. Given that Sentry generates a couple of Celery tasks for every incoming message, this periodically overflows our RabbitMQ queue.

We don't mind losing some messages as long as the logger stays responsive and keeps handling the rest of our tasks.

The solution is quite simple, but it took me some time to arrive at it. I added the "x-message-ttl" argument to each queue to make sure it never gets stuck. My settings.py file now looks like this:

```python
from kombu import Queue

# RabbitMQ interprets "x-message-ttl" in milliseconds,
# so ten minutes is 600000
CELERY_QUEUES = (
    Queue('default', routing_key='default', queue_arguments={'x-message-ttl': 600000}),
    Queue('celery', routing_key='celery', queue_arguments={'x-message-ttl': 600000}),
    Queue('cleanup', routing_key='cleanup', queue_arguments={'x-message-ttl': 600000}),
    Queue('sourcemaps', routing_key='sourcemaps', queue_arguments={'x-message-ttl': 600000}),
    Queue('search', routing_key='search', queue_arguments={'x-message-ttl': 600000}),
    Queue('counters', routing_key='counters', queue_arguments={'x-message-ttl': 600000}),
    Queue('events', routing_key='events', queue_arguments={'x-message-ttl': 600000}),
    Queue('triggers', routing_key='triggers', queue_arguments={'x-message-ttl': 600000}),
)
```


In short, this means that no messages will last longer than 10 minutes in a queue. Stale messages will be quietly removed.
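Since every queue gets the same argument, the settings.py fragment can also be collapsed into a loop. A sketch of the same configuration, assuming the same queue names; note that RabbitMQ interprets `x-message-ttl` in milliseconds, so ten minutes is 600000:

```python
from kombu import Queue

MESSAGE_TTL_MS = 10 * 60 * 1000  # RabbitMQ expects milliseconds

CELERY_QUEUES = tuple(
    # every queue gets the same per-message TTL
    Queue(name, routing_key=name,
          queue_arguments={'x-message-ttl': MESSAGE_TTL_MS})
    for name in ('default', 'celery', 'cleanup', 'sourcemaps',
                 'search', 'counters', 'events', 'triggers')
)
```

This keeps a single source of truth for the TTL if you ever decide to tune it.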

Some notes on top of that:

1. Adding the arguments alone isn't enough if your queues have already been created. After every change of queue_arguments you have to re-create your queues. In fact, all you have to do is delete them; Celery creates new ones on the next startup. I use the RabbitMQ Management Plugin for this. By the way, it works with RabbitMQ 3.x only, and don't forget to remove the default "guest" RabbitMQ user if you install it!
2. I took the list of Celery queues used by Sentry from the sentry.conf.server file (this is its latest version). Make sure you don't forget any queues.
3. Try the undocumented setting SENTRY_USE_SEARCH = False to reduce the number of tasks in your queue. Sentry does nasty things when search is turned on (proof)
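On point 1: with the Management Plugin installed, its rabbitmqadmin script can delete the queues from the command line. A sketch that prints one delete command per queue from the settings above, so you can review before running (credentials and vhost flags omitted):

```shell
# Print a delete command for every Sentry queue; Celery will
# re-create them with the new arguments on the next startup.
for q in default celery cleanup sourcemaps search counters events triggers; do
    echo rabbitmqadmin delete queue name="$q"
done
```

Pipe the output through `sh` to actually execute the deletions against the local broker.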

Happy logging!

# Celery for Internal API in SOA infrastructure

Posted:

I must admit that I'm very slow: PyCon Russia in Yekaterinburg ended almost a month ago, and only now have I managed to publish the transcript of my talk.

Anyway, the presentation is on Slideshare here.

The transcript of the talk is below.

# Introducing "resources". A fixture lifecycle management library for your tests

Posted:

This is a short introduction to a project we recently made here at Doist Inc to improve our test codebase. The project is named resources; it's not available on PyPI, but you can install it right from GitHub anyway.

```shell
pip install -e git+git@github.com:Doist/resources.git#egg=resources-dev
```


What's the point of the project?

The idea is to provide yet another way to manage your test fixtures (that is, the objects and other resources you usually create before you start verifying an assertion).

There are two popular ways to initialize fixtures in Python I'm aware of:

• xUnit-style, with setup/teardown methods. It is supported by the majority of frameworks and looks like the most universal way to initialize a testing environment. Yet it's somewhat verbose and makes you either repeat yourself, develop an extensive set of helper functions, build a hierarchy of test classes, or invent something else to stay DRY and keep your tests readable and manageable.
• py.test-style, where you inject dependencies into a test function by declaring them as parameters. The py.test magic instantiates the objects for you and calls the test function, passing them in as arguments. It's good because it's reusable and granular, but it's not very flexible: there is no easy way to pass parameters to a py.test fixture function.
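For contrast, a minimal sketch of the xUnit style from the first bullet; the FakeUser class is invented for this example:

```python
import unittest


class FakeUser:
    """Stand-in fixture object (invented for this sketch)."""

    def __init__(self):
        self.deleted = False

    def delete(self):
        self.deleted = True


class UserTest(unittest.TestCase):
    def setUp(self):
        # runs before every test method in the class
        self.user = FakeUser()

    def tearDown(self):
        # runs after every test method, even if it failed
        self.user.delete()

    def test_user_is_fresh(self):
        self.assertFalse(self.user.deleted)
```

Helpers like FakeUser tend to multiply across test classes, which is exactly the verbosity the bullet complains about.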

Now there is another way to do the same. The approach we propose is to create fixtures roughly the way py.test does -- a function per fixture -- and to use them the way Michael Foord's mock manages the lifespan of the patches it applies -- with context managers, function decorators, or start/stop methods.
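The three usage styles borrowed from mock look like this (a sketch using unittest.mock from the standard library; at the time of the post mock was a separate package imported as `mock`):

```python
import os
from unittest import mock

# 1. as a context manager
with mock.patch('os.getcwd', return_value='/fake'):
    assert os.getcwd() == '/fake'

# 2. as a function decorator: the patch is active inside the call,
# and the mock object is passed in as an extra argument
@mock.patch('os.getcwd', return_value='/fake')
def check(mocked_getcwd):
    return os.getcwd()

assert check() == '/fake'

# 3. with explicit start/stop methods
patcher = mock.patch('os.getcwd', return_value='/fake')
patcher.start()
try:
    assert os.getcwd() == '/fake'
finally:
    patcher.stop()
```

In every style the patch is reliably undone when the managed scope ends, which is the lifecycle guarantee resources borrows for fixtures.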

So, without further ado, here is a short usage example that explains things better than a thousand words. The library should work with py.test, nose, or unittest.

```python
# import the global instance
from resources import resources

# Register a resource named "user" by defining a function with the same
# name. The function must contain exactly one "yield" and is used both
# to set up and to tear down the fixture.

@resources.register_func
def user():
    user = create_user()  # hypothetical helper; the creation code is missing in the original
    try:
        yield user
    finally:
        user.delete()

# use the resource with an automatically created "user_ctx" context manager
def test_user_properties():
    with resources.user_ctx() as user:
        # the resource is available as the assignment target
        # of the "with" statement
        assert user is not None
```
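The yield-with-try/finally pattern the example relies on is the same one the standard library's contextlib.contextmanager uses, which is a reasonable mental model for what the library generates from a registered function (FakeUser is invented for this sketch):

```python
from contextlib import contextmanager


class FakeUser:
    """Stand-in for a real user object (invented for this sketch)."""

    def __init__(self):
        self.deleted = False

    def delete(self):
        self.deleted = True


@contextmanager
def user_ctx():
    user = FakeUser()
    try:
        # everything before "yield" is setup
        yield user
    finally:
        # everything after is teardown, run even if the test body fails
        user.delete()


with user_ctx() as user:
    assert not user.deleted
assert user.deleted  # teardown ran when the "with" block exited
```

The finally clause is what guarantees the teardown runs regardless of test failures, exactly like tearDown in the xUnit style.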