TREP for Data Management
TREP for Data Management is a flexible data management environment that delivers enterprise data services for business automation. It combines the acquisition, storage and distribution of enterprise information assets for the securities and investment industry, including reference and pricing data. TREP is the Thomson Reuters Enterprise Platform, which distributes, integrates and manages information across financial institutions; TREP for Real-Time and TREP for Velocity Analytics are also part of the suite.
TREP for Data Management acquires, validates and normalises data from a range of information sources, including snapshot, real-time, interactive and bulk-file vendor feeds as well as in-house applications and databases. The platform includes a configurable adapter and feed handler framework for the rapid development of custom application and vendor feed interfaces. It retains a full history of normalised data from all sources, known as the data universe. Cleansed master records (golden copies) can be derived from the data universe. One or more golden copies can be generated for specific business domains based on user-defined rules such as vendor hierarchies and price tolerances. These rules are defined in the data operations user interface by users who have been suitably authenticated and authorised. Four-eyes verification can be applied to all data content and rule changes.
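The rule-based derivation described above — preferring one vendor over another and flagging prices that disagree beyond a tolerance — can be sketched generically. Everything below (the vendor names, the record layout, the function name) is an illustrative assumption, not TREP's actual API or rule syntax.

```python
# Illustrative golden-copy derivation: take the price from the highest-ranked
# vendor and flag an exception when candidate prices disagree by more than a
# configured relative tolerance. All names here are hypothetical.

VENDOR_RANK = {"VendorA": 0, "VendorB": 1, "VendorC": 2}  # assumed hierarchy
PRICE_TOLERANCE = 0.01  # assumed 1% relative tolerance

def derive_golden_price(records):
    """records: list of dicts like {"vendor": ..., "price": ...}."""
    ranked = sorted(records, key=lambda r: VENDOR_RANK.get(r["vendor"], 99))
    best = ranked[0]
    prices = [r["price"] for r in ranked]
    spread = (max(prices) - min(prices)) / best["price"]
    # A breach would be routed to the exception-management workflow.
    return best["price"], spread > PRICE_TOLERANCE

price, breach = derive_golden_price([
    {"vendor": "VendorB", "price": 100.20},
    {"vendor": "VendorA", "price": 100.00},
])
```

In a real deployment such rules are configured through the data operations user interface rather than written in code; the sketch only shows the shape of the decision.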
- Event and exception management: provides workflow tools for managing exceptions such as price tolerance breaches, allowing users to accept, reject or manually enter a price as appropriate.
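The accept / reject / manual-entry choices an operator has for a tolerance breach can be sketched as a tiny state object. The class and field names below are invented for illustration and do not reflect TREP's internal model.

```python
# Hypothetical sketch of exception-management actions on a price-tolerance
# breach: accept the candidate price, reject it, or enter one manually.

class PriceException:
    def __init__(self, instrument, candidate_price):
        self.instrument = instrument
        self.candidate_price = candidate_price
        self.status = "open"
        self.final_price = None

    def accept(self):
        """Operator accepts the flagged candidate price."""
        self.status = "accepted"
        self.final_price = self.candidate_price

    def reject(self):
        """Operator rejects the price; it is excluded from the golden copy."""
        self.status = "rejected"

    def enter_manual(self, price):
        """Operator overrides with a manually entered price."""
        self.status = "manual"
        self.final_price = price
```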
- Reporting and metrics: maintains a full audit of all user defined configuration, business rules, data changes, exceptions and authorisations. User interfaces include dashboards displaying performance metrics associated with operations, such as feed handlers and data cleansing.
- Distribution and orchestration: the data orchestration engine includes an event-driven adapter framework and Web Services API, allowing data to be routed and transformed so that it can be readily consumed by downstream applications and business processes.
- Canonical data model: built on a canonical data model that ensures consistent interpretation of data across the enterprise, enabling users and applications to share accurate, consistent and timely data assets regardless of source. This is employed in the services framework for the rapid deployment of innovative solutions for business domains such as risk management, compliance and portfolio valuation.
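The value of a canonical model is that each vendor's idiosyncratic field names are mapped onto one shared schema before any consumer sees the data. A minimal sketch, assuming two invented vendor layouts and an invented canonical shape:

```python
# Mapping two hypothetical vendor-specific records onto one canonical shape,
# so downstream applications consume a single schema regardless of source.

from dataclasses import dataclass

@dataclass
class CanonicalQuote:
    isin: str
    price: float
    currency: str
    source: str

def from_vendor_a(raw):
    # Vendor A is assumed to use "id" / "px" / "ccy".
    return CanonicalQuote(raw["id"], raw["px"], raw["ccy"], "VendorA")

def from_vendor_b(raw):
    # Vendor B is assumed to use "isin" / "last" / "cur".
    return CanonicalQuote(raw["isin"], raw["last"], raw["cur"], "VendorB")
```

Once both feeds land in `CanonicalQuote`, a risk or valuation service needs only one parser, whichever vendor supplied the record.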
- Centralised data management: consolidates all the sources of data in order to create the normalised data universe. The browser-based user interface allows federated user groups to manage golden copies for specific business domains.
- Deployment scenarios: can be deployed on-site, as a hosted or a fully managed service.
- Interoperability: non-invasive and interoperable with existing IT infrastructure through the support of integration technologies including: SOAP Web Services, IBM MQ Series, Java Messaging Service, HTTPS and FTP. There are also native interfaces to complementary data distribution platforms such as TREP for Real-Time, allowing snapshot data to be acquired, cleansed and distributed.
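Of the integration technologies listed, SOAP over HTTPS is the most self-describing: a client wraps its request in a standard envelope and posts it to the service. The operation and element names below are invented placeholders; only the SOAP 1.1 envelope namespace is real.

```python
# Building a minimal SOAP 1.1 request envelope with the Python standard
# library. "GetGoldenCopy" and "Isin" are hypothetical names used purely
# to illustrate the message shape; they are not a documented TREP operation.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(isin):
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    operation = ET.SubElement(body, "GetGoldenCopy")  # hypothetical operation
    ET.SubElement(operation, "Isin").text = isin
    return ET.tostring(envelope, encoding="unicode")

request_xml = build_request("US0378331005")
```

The resulting XML string would then be POSTed to the service endpoint over HTTPS; message-queue integration via IBM MQ Series or JMS follows the same pattern with a different transport.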
- Enrichment: rule-based cleansing and enrichment capabilities that improve the quality of data assets delivered to mission-critical downstream applications and business processes
- Easy-to-use integration capabilities: readily integrates with existing IT environments, including third-party or proprietary applications and systems
- Advanced reporting and metrics
- Extensible and scalable architecture
- Open Message Model and flexible APIs make it easier to develop in-house applications
- Platform enables clients to create dynamic complex data types