About Data reader protocols

Board's Data reader protocol (also called simply Data reader) allows you to import data into a Board Data model from text files or from ODBC, OLE DB, and other external data sources using Board's robust set of connectors. A Data reader protocol defines how the external data is imported into the corresponding Board Data model and how it is mapped to Entities and Cubes: in other words, it determines which fields of a file or table in a relational database should be fed into which Entities and Cubes.

A Data reader protocol can feed multiple Entities and Cubes at the same time and, similarly, Cubes and Entities can be loaded from multiple Data reader protocols.

For example, if you need to feed two Cubes, Sales Amount and Sales Quantity, with data that resides in the same table of your transactional system, you can create a single Data reader protocol that loads both Cubes at the same time.
On the other hand, if you need to feed a Cube from two transactional systems, you can create two protocols that load data into the same Cube but connect to different data sources. This is useful, for example, when you need to consolidate data from multiple subsidiaries that have separate ERP systems.
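The sketch below is a purely illustrative Python representation of this idea; the table, field, Entity and Cube names are hypothetical, and in Board the mapping is configured in the Data reader's Mapping step rather than written as code.

```python
# Illustrative sketch only: a single Data reader protocol mapping one source
# table to several Entities and two Cubes. All names below are hypothetical.
source_table = "SALES_TRANSACTIONS"  # table in the transactional system

field_mapping = {
    # source column    -> target Entity (dimension)
    "CUSTOMER_CODE":  {"entity": "Customer"},
    "PRODUCT_CODE":   {"entity": "Product"},
    "TRANSACTION_DT": {"entity": "Day"},
    # source column    -> target Cube (measure); both Cubes fed by one protocol
    "NET_AMOUNT":     {"cube": "Sales Amount"},
    "QUANTITY":       {"cube": "Sales Quantity"},
}

# Consolidating a second subsidiary would mean a second protocol with the same
# targets but a different data source, e.g. "SALES_TRANSACTIONS_SUB_B".
```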

A Data reader protocol can include transformation formulas and validation rules which are automatically applied to the incoming data. These rules and formulas are defined using the ETL section of the Data reader configuration area.
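As a rough illustration only, the following Python sketch shows what a transformation formula and a validation rule conceptually do to each incoming record; in Board these rules are defined declaratively in the ETL step, and the field names and rules below are hypothetical.

```python
def transform(record: dict) -> dict:
    """Hypothetical transformation formulas applied to an incoming record."""
    record["PRODUCT_CODE"] = record["PRODUCT_CODE"].strip().upper()              # normalize codes
    record["NET_AMOUNT"] = round(record["NET_AMOUNT"] * record.get("FX_RATE", 1.0), 2)
    return record

def is_valid(record: dict) -> bool:
    """Hypothetical validation rule: discard records with no customer code
    or a non-positive quantity."""
    return bool(record["CUSTOMER_CODE"]) and record["QUANTITY"] > 0

incoming = [
    {"CUSTOMER_CODE": "C001", "PRODUCT_CODE": " ab12 ", "QUANTITY": 5, "NET_AMOUNT": 100.0, "FX_RATE": 1.1},
    {"CUSTOMER_CODE": "",     "PRODUCT_CODE": "XY99",   "QUANTITY": 3, "NET_AMOUNT": 40.0},
]

loaded    = [transform(r) for r in incoming if is_valid(r)]    # fed into the Cubes
discarded = [r for r in incoming if not is_valid(r)]           # would end up in the discard log
```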

Data reader protocols can be launched from a Procedure or directly from the Data reader section of the Data model. Usually, Board Data models are updated daily with a scheduled overnight process that runs all required Data reader protocols.

It is important to note that a Data reader reads records in parallel rather than sequentially, in order to execute faster. This means that records are not processed in a predictable order: if a dataset contains duplicate records, you cannot rely on the last occurrence being the value stored in Board. For this reason, including duplicate records is not considered a best practice.
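The following Python sketch, using hypothetical data, shows why this matters and a common workaround: aggregating (or deduplicating) records before they reach the Data reader, so that each key appears only once.

```python
from collections import defaultdict

# Two hypothetical records sharing the same key (Customer, Product, Day).
rows = [
    {"key": ("C001", "P01", "2022-08-01"), "amount": 100.0},
    {"key": ("C001", "P01", "2022-08-01"), "amount": 250.0},  # duplicate key
]

# Because records are read in parallel, there is no guarantee which of the two
# values would be stored last. Pre-aggregating removes the ambiguity by keeping
# one record per key with an explicit rule (here: sum).
aggregated = defaultdict(float)
for row in rows:
    aggregated[row["key"]] += row["amount"]

print(dict(aggregated))  # {('C001', 'P01', '2022-08-01'): 350.0}
```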

The mapping and ETL sections of the Data reader are two of Board's core data ingestion capabilities, and they translate into tangible savings during project implementation. With the power and flexibility of the Data reader, in most cases you can feed your Board Data models directly from the source system (such as an ERP, CRM or other operational tools) without the need for intermediate data staging layers, such as a data mart or a data warehouse. This sets Board apart from most other planning platforms, which typically require the source data to be cleansed and organized in a star or snowflake schema first: a process that represents a significant, and often overlooked, implementation cost.


To access the Data reader section of a Data model, open the designer space of the desired Data model and click on the Data reader tile.

In the Data reader page, you can see all existing Data reader protocols in the Data model, along with their main details: the table is sortable and searchable using the interactive header fields. You can also show or hide columns to your liking by clicking the Column chooser button in the upper right corner of the table.

You can perform different actions on one or more Data readers by selecting them and clicking the buttons above the list. See Running and managing Data reader protocols for more details.


When your Data model includes a large number of Data readers, it might be difficult to locate a specific one or to understand the purpose of all the Data readers listed in the table. In this case, we strongly recommend that you logically group them using the Group cell in each row.

Groups are not part of the multidimensional Data model (i.e. they cannot be used in Procedures): their only purpose is to make the list of Data readers easier to read and search.


The Data reader section of each Data model in Board allows you to:

  • Create, delete, copy and run Data readers
  • Edit any of the three main configuration steps of each Data reader (Source, Mapping, and ETL)
  • Enable event logging for specific Data readers. This option creates a log file with all discarded records. If no records are discarded, no log file is created. However, the general Data model log file always contains a log line for each Data reader execution.
    The log file with the rejected records is created in the following locations:
    • For Text file Data readers, in the same path where the source text file is located
    • For SQL Data readers, in the "Other logs" folder under the "LOGS" section in the Cloud Administration Portal. The log file name includes the Data reader name, the Data reader ID and the timestamp. For example, SourceData_025_202208.log (see the naming sketch after this list)
    • For SAP Data readers, in the "Other logs" folder under the "LOGS" section in the Cloud Administration Portal. The log file name includes the Data model name, the Data reader ID and the timestamp. For example, Production_9958_202208.log
  • Enable the "Stop on error" option for specific Data readers. This option causes the Data Reader protocol to stop in the event a record is rejected or discarded