Frequently Asked Questions

Models and libraries

  • Which languages can TRAC run models in? TRAC can orchestrate models in Python, Spark, SQL, Java and Scala. A model coded in any of these languages will run on TRAC so long as it includes a custom function which declares its schema to the platform; existing model code can be 'wrapped' inside this function. Further details on the Model API, together with tutorials on building and wrapping models, are available on our external documentation site.

  • Can models running on TRAC use external libraries? Yes. TRAC can use external model libraries, although the TRAC Guarantee will be affected by the versioning capabilities of those libraries.

  • Does TRAC come with models included? No. TRAC is a universal model orchestration solution into which you deploy and manage your own models. We do not, at this moment, provide models as part of the deployment.

  • Can TRAC run SAS models? No. For technical reasons, the TRAC Guarantee cannot be realised for SAS models; they would need to be translated into one of the supported open languages (Python, R, Spark, Java, SQL, Scala).
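To illustrate the wrapping pattern described above, the sketch below shows pre-existing Python model code enclosed in a class that declares its parameters, inputs and outputs to the platform. All class, method and schema names here are hypothetical stand-ins, not the real TRAC Model API; see the external documentation site for the actual API.

```python
def existing_model(balances, pd_rate):
    """Pre-existing model code, reused unchanged inside the wrapper."""
    return [b * pd_rate for b in balances]


class ModelContext:
    """Stand-in for the runtime context the platform would pass to a model."""

    def __init__(self, inputs, params):
        self.inputs, self.params, self.outputs = inputs, params, {}

    def get_input(self, name):
        return self.inputs[name]

    def get_parameter(self, name):
        return self.params[name]

    def put_output(self, name, value):
        self.outputs[name] = value


class WrappedModel:
    """Wrapper that declares the model's schema and delegates to the
    original code in run_model (illustrative names only)."""

    def define_parameters(self):
        return {"pd_rate": "FLOAT"}

    def define_inputs(self):
        return {"balances": "FLOAT column"}

    def define_outputs(self):
        return {"expected_loss": "FLOAT column"}

    def run_model(self, ctx):
        balances = ctx.get_input("balances")
        pd_rate = ctx.get_parameter("pd_rate")
        ctx.put_output("expected_loss", existing_model(balances, pd_rate))
```

The key design point is that the original function is untouched: the wrapper only adds the schema declarations the platform needs, which is what allows existing model code to be brought onto TRAC without a rewrite.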

Performance and security

  • Is our data secure? TRAC is deployed into your own data infrastructure (cloud, on-premise or hybrid), so no information leaves your data security perimeter. All we require is a zone within that environment where the TRAC application controls write access.

  • How are critical Jobs insulated from other workloads? TRAC orchestrates the execution of Jobs on your nominated compute infrastructure. Compute resource is ring-fenced for Jobs that have the 'priority tag' assigned via the Policy Service, so they are insulated from the platform's general analytical workload.

  • Can TRAC handle large data volumes? TRAC is only the orchestration service, so this depends on the compute infrastructure you provide. However, the platform supports distributed Spark computation, so large data volumes are not typically a challenge.

  • Will TRAC increase our data storage requirements? Counterintuitively, we find that TRAC reduces overall data volumes. Because the TRAC Guarantee means that every result can be regenerated at will, there is far less need to retain intermediate results and the output of analytical Jobs. Input data is immutable, but the retention policy for output data sets can be configured via the Policy Service.

Use cases and alternatives

  • What kinds of process is TRAC best suited to? Any analytical process can be run on the platform, but the platform's central features are designed to help manage structurally complex calculations which are subject to high levels of analytical scrutiny and robust governance frameworks. Models and calculations which inform accounting, regulatory disclosures or strategic decisions are strong candidates.

  • Is TRAC a model development tool? No. TRAC is primarily a platform for model use. The TRAC runtime can be deployed into an IDE of your choosing, which allows you to build models that translate to production with a single click; however, it would be just one of the development tools in the IDE.

  • Can TRAC orchestrate AI/ML models? Any model which can be executed as Spark or Pandas code can be managed and orchestrated on TRAC, including AI/ML models. However, TRAC is mainly aimed at orchestrating structurally complex calculations and is not a tool for training AI/ML models.

  • Can TRAC be used outside the Finance domain? Yes. The platform itself is agnostic to the content of the models, so it could be used in any number of business domains. If you have a specific non-Finance use case in mind, please let us know.

Licensing and costs

  • Is TRAC open source? Yes. The core platform services are provided under a permissive license via the FINOS foundation. There is also a free-to-use packaged deployment, which is a great way to get started with TRAC.

  • How can we find out more? You can contact us at sales@fintrac.co.uk and we would be happy to provide more information, or submit a specific enquiry via the contact us page.

  • How is TRAC priced? We have two commercially supported versions of the platform, with slightly different feature bundles. Both packages are priced by the number of asset-generating users (model developers and model users), with no charge for admin or reviewer licenses. The two versions are summarised on the packages page, and you can contact us for more details.