How do we generate insight and understanding from a flood of market data?

Drawing conclusions from market data is difficult. At times it feels like holding a tiger by the tail: a crush of data, across many products and many exchanges, dispersed over time. It is easy to drown in it. With strategy and planning, we tamed the data by dividing the effort into two problems: storage and calculation.

At the outset our aim was a system in which database and calculation requirements are cleanly separated. This allows us to approach each problem with full flexibility. Here are some important design features we took into account in building the system:

    Database Requirements:
  • Hierarchy: Options are organized in a tree-like fashion. The database must reflect this structure or performance will suffer. Our custom-built database is extremely efficient: we can retrieve and render volatilities almost instantaneously from gigabytes of data.
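One way to picture the tree-like organization is a nested mapping from product to expiry to strike, so that fetching one expiry's volatility slice never touches unrelated data. This is a minimal sketch with hypothetical names and sample values, not the actual database layout:

```python
from collections import defaultdict

class OptionTree:
    """Toy hierarchical store: product -> expiry -> strike -> implied vol."""

    def __init__(self):
        self.tree = defaultdict(lambda: defaultdict(dict))

    def insert(self, product, expiry, strike, vol):
        self.tree[product][expiry][strike] = vol

    def slice(self, product, expiry):
        # One expiry's smile is a single dictionary lookup; no scan of
        # other products or expiries is needed.
        return self.tree[product][expiry]

tree = OptionTree()
tree.insert("ES", "2024-06", 4500.0, 0.18)
tree.insert("ES", "2024-06", 4600.0, 0.17)
tree.insert("ES", "2024-09", 4500.0, 0.20)
smile = tree.slice("ES", "2024-06")
```

The point of the shape is that a query for one node of the tree retrieves exactly that subtree, rather than filtering a flat table of every option ever listed.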

  • Time: Options and futures have a lifecycle: they come into existence and they expire. Our highly optimized database takes these lifecycles into account, so it never searches for options that do not exist.
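The lifecycle filter amounts to keeping only contracts whose listing-to-expiry window covers the as-of date. A minimal sketch, with assumed field names and made-up contracts:

```python
from datetime import date

def alive(contracts, as_of):
    """Keep only contracts that exist on the as-of date."""
    return [c for c in contracts
            if c["listed"] <= as_of <= c["expiry"]]

contracts = [
    {"symbol": "ESM4", "listed": date(2023, 6, 16), "expiry": date(2024, 6, 21)},
    {"symbol": "ESU4", "listed": date(2023, 9, 15), "expiry": date(2024, 9, 20)},
    {"symbol": "ESM3", "listed": date(2022, 6, 17), "expiry": date(2023, 6, 16)},
]
# On 2024-07-01, ESM4 and ESM3 have already expired; only ESU4 is live.
live = alive(contracts, date(2024, 7, 1))
```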

  • Versioning: We built versioning into our database from the beginning. Every volatility collection or curve is tagged and retained. Model inputs, such as interest rates or dividends, can change; any change triggers a recalculation, and our system keeps track of it all. Versioning enables backtesting with full confidence that there is no look-ahead bias in our results.
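An append-only store is one simple way to get this property: every save receives a new version tag, nothing is overwritten, and a backtest reads "as of version v" so later recalculations stay invisible to it. A hypothetical sketch, not the production schema:

```python
class VersionedStore:
    """Append-only versioned store: saves are tagged, never overwritten."""

    def __init__(self):
        self.history = []  # list of (version, key, value)

    def save(self, key, value):
        version = len(self.history) + 1
        self.history.append((version, key, value))
        return version

    def as_of(self, key, version):
        # Latest value for key whose version does not exceed `version`;
        # a backtest pinned to v1 can never see a later recalculation.
        for v, k, val in reversed(self.history):
            if k == key and v <= version:
                return val
        return None

store = VersionedStore()
v1 = store.save("ES 2024-06 curve", {"rate": 0.05, "atm_vol": 0.18})
v2 = store.save("ES 2024-06 curve", {"rate": 0.055, "atm_vol": 0.19})  # rate changed
old = store.as_of("ES 2024-06 curve", v1)  # the pre-change curve, unchanged
```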

    Calculation Requirements:
  • Normalization: We marshal data from many sources so you don't have to. Our process constructs a normalized view of each exchange's data, however it was originally formatted.
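Normalization typically means a small adapter per source that maps each feed's native shape into one common record. Both feed formats below are invented for illustration:

```python
def from_exchange_a(raw):
    # Hypothetical Exchange A: quotes arrive as a "SYMBOL,BID,ASK" string.
    sym, bid, ask = raw.split(",")
    return {"symbol": sym, "bid": float(bid), "ask": float(ask)}

def from_exchange_b(raw):
    # Hypothetical Exchange B: prices arrive as integer ticks of 0.01.
    return {"symbol": raw["sym"],
            "bid": raw["bid_ticks"] / 100,
            "ask": raw["ask_ticks"] / 100}

# The same market quote, delivered in two different shapes, lands in one
# normalized record that downstream calculations can consume uniformly.
quotes = [from_exchange_a("ESM4,4500.25,4500.50"),
          from_exchange_b({"sym": "ESM4", "bid_ticks": 450025, "ask_ticks": 450050})]
```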

  • Model Applicability: The calibration process must quickly and correctly match the appropriate pricing model to each option.
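Matching a model to an option can be as simple as dispatching on the contract's attributes, such as exercise style and underlying type. The routing rules and model names below are placeholders, not the firm's actual calibration logic:

```python
def choose_model(option):
    """Route an option to a pricing model by its contract attributes."""
    if option["underlying"] == "future":
        return "Black-76"          # options on futures
    if option["style"] == "american":
        return "binomial-tree"     # early exercise needs a lattice
    return "Black-Scholes"         # European options on spot underlyings

model = choose_model({"underlying": "future", "style": "european"})
```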

  • Speed: End-of-day data is ready for consumption before the next day's market open.