LMAX Disruptor now open source

This article comes from an external website.
Source: http://www.davefarley.net/?p=151

As part of our work to create an ultra-high-performance financial exchange, we looked into a lot of different approaches to high performance computing. We came to the conclusion that a lot of the common assumptions in this area were wrong.

We have done a lot of things to make our code fast and efficient, but the single most important thing has been to develop a new approach to managing the coordinated exchange of data between threads. This has made a dramatic difference to the performance of our code. We think it sets a new benchmark for performance, beating comparable implementations that use queues to separate processing nodes by three orders of magnitude in latency and by a factor of eight in throughput. You can watch a presentation by a couple of my colleagues describing one of our uses of this technology here.

I am pleased to announce that we have now released this as an open-source project. There is a technical article describing the approach and providing some evidence for our claims available at the site.

We think that this is the fastest way to write code that needs to coordinate the activity of several threads and all of our experiments so far have backed this up.
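To give a feel for the programming model, here is a minimal producer/consumer sketch against the public com.lmax.disruptor API as it exists in current releases of the library (which post-date this announcement); the ValueEvent type, the buffer size and the class name are purely illustrative, not part of the library.

import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class DisruptorSketch
{
    // Entries are pre-allocated up front and reused, so the hot path allocates nothing.
    static final class ValueEvent
    {
        long value;
    }

    public static void main(final String[] args) throws Exception
    {
        // The ring buffer size must be a power of two.
        final Disruptor<ValueEvent> disruptor =
                new Disruptor<>(ValueEvent::new, 1024, DaemonThreadFactory.INSTANCE);

        // The consumer runs on its own thread; producer and consumer coordinate
        // through sequence counters rather than locks or queues.
        disruptor.handleEventsWith((event, sequence, endOfBatch) ->
                System.out.println("consumed " + event.value));
        disruptor.start();

        // The producer claims a slot, writes into the pre-allocated entry and publishes it.
        for (long i = 0; i < 10; i++)
        {
            final long value = i;
            disruptor.getRingBuffer().publishEvent((event, sequence) -> event.value = value);
        }

        Thread.sleep(100); // crude: give the daemon consumer thread a moment to drain
        disruptor.shutdown();
    }
}

Note that the event instances live in the ring buffer for its whole lifetime; the producer writes into an existing slot rather than allocating a new message, which is one of the ways the design avoids the garbage and contention that queues introduce.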

Why ‘Disruptor’?

Well, there are two reasons. Primarily we wanted to disrupt the common assumptions in this space because we think that they are wrong. But, to be honest, we also couldn’t resist the temptation; there was some talk about Phasers in Java at the time we named it and, for those of you too young to care, Phasers were the Federation weapon and Disruptors the Klingon equivalent in Star Trek ;-)

Disruptor – The implications on design

This article comes from an external website.
Source: http://www.davefarley.net/?p=154

My company, LMAX, has recently released our Disruptor technology for high performance computing. You can read Martin Fowler’s overview here.

The level of interest that we have received has been very pleasing, but there is one point that is important to me that could be lost in the understandable, and important, focus on the detail of how the Disruptor works. That is the effect on the programming model for solving regular problems.

I have worked in the field of distributed computing in one form or another for a very long time. I have written software that used file exchange, data exchange via RDBMS, Windows DDE (anyone else remember that?), COM, COM+, CORBA, EJB, RMI, MOM, SOA, ESB, Java Servlets and many other technologies. Writing distributed systems has always added a level of complexity that simply does not exist in the sort of simple, single computer, code that we all start out writing.

Our use of the Disruptor, at LMAX, is the closest to that simplicity that I have seen. One way of looking at the way that we use this technology is that it completely isolates business logic from technology dependencies. Our business logic does not know how it is stored, how it is viewed, whether or not it is clustered, or anything else about the infrastructure that it runs on. There are two concessions that result from the programming model imposed upon us by our Disruptor-based infrastructure, neither of which is onerous or even unusual in regular OO programming. Our business logic needs to expose its operations via an interface, and it reports results through an interface.

Typically our business logic looks something like this:

public interface MyServiceOperations
{
    void doSomethingUseful(final String withSomeParameters);
}

// The interface through which results are reported back to the infrastructure.
public interface EventChannel
{
    void onUsefulWorkDone(final String someResults);
}

public class SomeService implements MyServiceOperations
{
    private final EventChannel eventChannel;

    public SomeService(final EventChannel eventChannel)
    {
        this.eventChannel = eventChannel;
    }

    @Override
    public void doSomethingUseful(final String withSomeParameters)
    {
        // some useful work here ending up with someResults
        final String someResults = withSomeParameters; // stand-in for the real results

        eventChannel.onUsefulWorkDone(someResults);
    }
}

There are no real constraints on what parameters may be passed, on how many interfaces may be used to expose the logic of the service, on how many event channels are used to publish its results, or on anything else. That is it: we need to register at least one interface to get requests into the service and at least one to publish results from it. There is also nothing special about these interfaces; they are plain old Java interfaces specified by the business logic (nothing to inherit from) and then registered with our infrastructure, roughly in the spirit of the sketch below.
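Our internal registration API is not part of the open-source release, so the wiring below is only a hypothetical sketch: the RequestEvent type and ServiceInvoker class are illustrative names, not LMAX code. It shows how a plain interface like MyServiceOperations could be driven from a Disruptor EventHandler, which the library invokes from a single consumer thread.

import com.lmax.disruptor.EventHandler;

// Hypothetical request event carried on the ring buffer; the field name is illustrative.
final class RequestEvent
{
    String parameters;
}

// Bridges the ring buffer to the business interface. The Disruptor calls onEvent
// from a single consumer thread, which is what leaves the business logic
// single-threaded and free to be fully stateful.
final class ServiceInvoker implements EventHandler<RequestEvent>
{
    private final MyServiceOperations service;

    ServiceInvoker(final MyServiceOperations service)
    {
        this.service = service;
    }

    @Override
    public void onEvent(final RequestEvent event, final long sequence, final boolean endOfBatch)
    {
        // Dispatch the request onto the plain Java interface exposed by the business logic.
        service.doSomethingUseful(event.parameters);
    }
}

In the same spirit, the EventChannel handed to SomeService could publish its results onto another ring buffer, so the business logic never sees the infrastructure on either side.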

Since our business logic runs on a single thread and only communicates with the outside world via these interfaces, it can be as clean as we like. Our models are fully stateful, rich object oriented implementations of models that address the problem that we are trying to solve. Sometimes our models are not as good as they could be, but that is not because of any technology imposition; it is because our modelling wasn’t good enough!

Of course it is not quite that simple: mechanical sympathy is still important in that we need to separate our services intelligently so that they are not tightly coupled, but this too is only about good OO design and a focus on a decent separation of concerns at the level of our services as well as at the more detailed level of our fine-grained object models. This is the cleanest, most uncluttered approach to distributed programming that I have ever seen – by far. It feels like a liberation. I am very proud of our infrastructural technology, but for me the real win is not how clever our infrastructure is, but the degree to which it has freed us to concentrate on the business problems that we are paid to solve.