Loading data

Before you can add variable data fields to a template in the Designer, you need to have a Data Model and a sample of customer data. At the design stage the Designer doesn't need to have access to all data; it just needs to know which data fields exist in your data and it needs some data to be able to display a preview of the output.

To get access to a Data Model and data, you can open a data mapping configuration, a data file, or a database (see the sections below).

When you open a data file or a database, the Data Model will be derived from it. That is, unless there already is an open data mapping configuration; in that case, the current data mapping configuration will try to retrieve data from the file or database, using its own Data Model and extraction logic.

After opening a data mapping configuration, data file or database, the Data Model pane at the bottom right shows the data fields that occur in the data.

The Value column displays data from the first record in the data file. Use the First, Previous, Next and Last buttons to browse through the records, or use the Page Up, Page Down, Home and End keys.

Note that when data is loaded directly from a file or database, all values are strings (for example, an amount of 100 is loaded as the text "100", not as a number).

Designing a template without sample data

Strictly speaking, you don't need sample data to design a template; the only thing you really need in order to add variable data fields to a template is a Data Model.
You can open a Data Model without data by importing a Data Model file, a JSON file or a Connect template file from within the Data Model pane, using the icon at the top right of the pane. The file's data model structure will be displayed in the Data Model pane, but the data is not included.
You can also add fields and tables and set default values for fields in the Data Model pane; see Changing the Data Model. This way you could even build a Data Model in the Designer from scratch.
However, without sample data you won't be able to preview the template with actual data in the Designer.

It isn't possible to enter values directly in the Data Model pane, but there is a workaround: click the JSON Sample Data toolbar button on the Data Model pane. You will see a JSON string that represents a single record based on the current Data Model, with dummy values (like empty strings and zeros). Edit that JSON string and click OK to insert the data in the Data Model.
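
For example, for a Data Model with the (hypothetical) fields name, city and amount, the generated JSON string would look something like this:

    {
        "name": "",
        "city": "",
        "amount": 0
    }

Editing the dummy values, for instance to "Joanna Smith", "Boston" and 149.95, and clicking OK gives you a single record of sample data to preview with.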

Generally, the best way to access data is by creating a data mapping configuration. With a data mapping configuration you can, among other things:

  • Use Workflow to automate the extraction of data from data files of the same kind.
  • Load transactional or structured data. If there are detail lines, transactions, or any variable number of items to put into the template, you need a data mapping configuration to extract them.
  • Format, transform, conditionally include/exclude and enhance data from the source file.
  • Use other field types. When loaded from a file or database, all fields contain strings.
  • Use the same data file with different templates, or use different kinds of data files with the same template.

If you have no data at hand, download a demo from https://demo.objectiflune.com and open a dummy data file to test with.

Loading a data mapping configuration

If you already have an open data mapping configuration, its Data Model and sample data will automatically be used when you start creating a template. You might have to click the Synchronize Model button on the Data Model pane to update the fields.

To open a data mapping configuration:

  1. Open the Welcome screen: click the Home icon at the top right or select Help > Welcome on the menu.
  2. Click Open File.
  3. Select the data mapping configuration and open it.
  4. At the top of the workspace, click the tab with the name of the template's section to go back to the template.
  5. Click the Synchronize Model button at the top of the Data Model pane to reload the data model.
The ExtraData field that appears as the first field in each record and in each detail table is automatically added to the Data Model by the DataMapper. It makes it possible to add extra data to existing data records in a Workflow process.
The ExtraData field can be used in a template just like any other data field (see Variable Data). When it contains a JSON string, its value can be read with a script using JSON.parse(), as shown in the sketch below.
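
As a minimal sketch, assuming ExtraData holds a JSON string such as {"memberLevel":"gold"} placed there by a Workflow process (the field content and the selected element are hypothetical), a Designer script could read it like this:

    // Parse the JSON string stored in the ExtraData field of the current record.
    var extra = JSON.parse(record.fields.ExtraData);
    // Write one of the parsed values into the element(s) matched by the script's selector.
    results.text(extra.memberLevel);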
When generating output with just a data mapping configuration, the template is merged with the complete sample data file that is part of the data mapping configuration. The output is not limited to the number of records shown in the Data Model pane (which is one of the settings in the DataMapper).

Adding data from a data file

  1. Click File, select Add Data and then click From file data source. Browse to the location of the file and select it.
    The Designer can open the following types of data files:
    • Tabular files: CSV files (.csv, .txt) and Excel files (.xls, .xlsx)
      Excel files saved in "Strict Open XML" format are not supported yet.
    • Microsoft Access Database (.mdb, .accdb)
    • JSON files (.json)
    • XML files (.xml)
    • PDF/VT files
  2. Review the options presented to ensure that the data will be interpreted correctly. The options available depend on the type of data file (see below).

Excel (XLS/XLSX) file options

  • First row contains field names: Check this option to use the first line of the Excel file as headers. This option automatically names all extracted fields.
  • Sheet: Only one sheet can be selected as the data source.
    • Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual. Even if the selected column has numbers, it will be sorted as text (for example, "10" sorts before "2").

CSV file options

  • Encoding: The Designer cannot infer the encoding from a CSV file. The default is right in the large majority of cases, but when it isn't, it can be very difficult to figure out the correct encoding. Ask the supplier of the file which encoding was used.
  • Field separator: Choose the character that separates the fields in the file.
  • Comment delimiter: If there are comment lines in the file, type the character that starts a comment line.
  • Text Delimiter: Type the character that surrounds text fields in the file. Other delimiters will not be interpreted within these text delimiters.
  • Ignore unparsable lines: When checked, any line that does not correspond to the above settings will be ignored.
  • First row contains field names: Check this option to use the first line of the CSV as headers. This option automatically names all extracted fields.
    • Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual. Even if the selected column has numbers, it will be sorted as text.

MDB file options

  • File: Include the full path to the file.
  • Password: If the file isn't password protected, you can click Next without filling out this field.
  • Table name: Use the drop-down to select the appropriate table or stored query to retrieve the appropriate data set.
  • Encoding: Use the drop-down to select the encoding with which to read the data in the table.
    • Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual. Even if the selected column has numbers, it will be sorted as text.

JSON file options

After selecting a JSON file, specify if and how the JSON file must be split into multiple records.
This is done by selecting an object or array as the parent element. Its direct child elements (objects and arrays, not key-value pairs) can be seen as individual source records. If the root is selected, there will be only one source record.

Whether source records are output as individual records depends on the trigger. Either:

  • Select On element to create a new record in the output for each object or array in the parent element.

  • Select On change to create a new record each time the value in a certain key-value pair changes. Only key-value pairs that exist at the root of a child element can be evaluated.

All data found in child elements of the selected parent element is extracted to fields at the root of the Data Model. If a value consists of an object or array, the entire object or array is extracted to one data field, as in the example below.
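
As an illustration, consider the following made-up JSON file. If the clients array is selected as the parent element and the trigger is set to On element, each of its three child objects becomes a source record; with the trigger set to On change on region, a new record only starts when the value of region changes (here: two records). In either case, the nested orders array of each client is extracted to a single data field.

    {
        "clients": [
            { "name": "Alice", "region": "East", "orders": [ { "id": 1 }, { "id": 2 } ] },
            { "name": "Bob",   "region": "East", "orders": [ { "id": 3 } ] },
            { "name": "Carol", "region": "West", "orders": [ ] }
        ]
    }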

Field names are derived from the keys, objects and arrays in the first record, but those aren't necessarily the same in subsequent records. If subsequent records have a different structure, for example if a record has more child elements than the first record, some data may not be extracted.

JSON data can also be imported directly into the Data Model; see Adding JSON sample data.

XML file options

Select what level of XML elements defines a record.

The Trigger is what triggers the creation of a new record. It can be set to:

  • On element: This defines a new record when a new element occurs on the selected XML level.
  • On change: This defines a new record when a specific field under the chosen XML level has a new value. After selecting this option, you have to select the field that triggers the creation of a new record.
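
As an illustration, take the following made-up XML file. Selecting the customer level with On element creates three records, one per customer element; On change with the region field selected starts a new record only when the value of region changes, which here results in two records.

    <customers>
        <customer><region>East</region><name>Alice</name></customer>
        <customer><region>East</region><name>Bob</name></customer>
        <customer><region>West</region><name>Carol</name></customer>
    </customers>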

PDF/VT file options

After selecting a file, use the drop-down to select what level in the PDF/VT file defines a record in your data. The names of the levels are taken from the PDF/VT file itself. (See About PDF/VT files.)
All metadata fields that belong to the chosen level and levels higher up in the tree structure will be listed. The lower the chosen level is in the tree structure, the more records you will get and the more metadata fields will appear in the list.
Select metadata fields to add them to your data. Their property names will be used as field names in the Designer's data model.

About PDF/VT files

The pages in PDF/VT files can be grouped on several levels. PDF/VT files can have a variable number of levels in their tree structure. The levels' names are variable as well, with the exception of the lowest level, which is always called the page level. Metadata can be attached to each level in the structure.

AFP file options

After selecting a file, use the drop-down to select what level in the AFP file defines a record in your data. The levels are defined in the AFP file itself. (See About AFP files.)
All metadata fields that belong to the chosen level and higher levels in the tree structure will be listed. The lower the chosen level is in the tree structure, the more records you will get and the more metadata fields will appear in the list.
Select metadata fields to add them to your data. Their property names will be used as field names in the Designer's data model.

About AFP files

Pages in AFP files are arranged in a tree structure, comprising one document at the top of the structure, pages at the bottom, and one or more levels of page groups in between. (Unlike in PDF/VT files, the names of the levels in an AFP file cannot be chosen freely.) In other words, an AFP file always consists of one document, which can contain page groups, each of which can in turn be divided into page groups, and so on; the page groups at the lowest level contain pages.
Metadata can be attached to each level in the structure.

Adding data from a database

  1. Click File, select Add Data and then click From database data source. (For a file-based database, such as Microsoft Access, browse to the location of the file and select it.)
    The Designer can open databases from the following types of data sources:
    • MySQL
    • Microsoft Access Database (.mdb, .accdb)
    • SQL Server
    • ODBC DataSource
    • JDBC
    • Oracle
  2. Review the options presented. The options available depend on the type of database data source; see below.

MySQL

  1. Enter the appropriate information to connect to the database:
    • Server: Enter the server address for the MySQL database.
    • Port: Enter the port to communicate with the MySQL server. The default port is 3306.
    • Database name: Enter the exact name of the database from where the data should be extracted.
    • User name: Enter a user name that has access to the MySQL server and specified database. The user only requires Read access to the database.
    • Password: Enter the password that matches the username above.
  2. Click Next and enter the information for the source table.
    • Connection string: Displays the full path to the database.
    • Table: Use the drop-down to select the appropriate table or stored query to retrieve the appropriate data set.
    • Encoding: Use the drop-down to select the encoding with which to read the data in the table.
      • Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual. Even if the selected column has numbers, it will be sorted as text.

  3. Click Finish to open the database.

Microsoft Access

  1. Enter the appropriate information to connect to the database:
    • File name: Browse to your Microsoft Access database file (.mdb).
    • Password: Enter a password if one is required.
  2. Click Next and enter the information for the source table.
    • Connection string: Displays the full path to the database.
    • Table: Use the drop-down to select the appropriate table or stored query to retrieve the appropriate data set.
    • Encoding: Use the drop-down to select the encoding with which to read the data in the table.
      • Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual. Even if the selected column has numbers, it will be sorted as text.

  3. Click Finish to open the database.

SQL Server

  1. Enter the appropriate information to connect to the database:
    • Server: Enter the server address for the SQL Server database.
    • Port: Enter the port to communicate with the SQL Server. The default port is 1433.
    • Database name: Enter the exact name of the database from where the data should be extracted.
    • User name: Enter a user name that has access to the SQL Server and specified database. The user only requires Read access to the database.
    • Password: Enter the password that matches the user name above.
  2. Click Next and enter the information for the source table.
    • Connection string: Displays the full path to the database.
    • Table: Use the drop-down to select the appropriate table or stored query to retrieve the appropriate data set.
    • Encoding: Use the drop-down to select the encoding with which to read the data in the table.
      • Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual. Even if the selected column has numbers, it will be sorted as text.

  3. Click Finish to open the database.

ODBC DataSource

  1. Select the ODBC system data source. Note: Only 32-bit data sources are currently shown in this dialog, even if your system is 64-bit.
  2. Click Next and enter the information for the source table.
    • Connection string: Displays the full path to the database.
    • Table: Use the drop-down to select the appropriate table or stored query to retrieve the appropriate data set.
    • Encoding: Use the drop-down to select the encoding with which to read the data in the table.
      • Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual. Even if the selected column has numbers, it will be sorted as text.

  3. Click Finish to open the database.

JDBC

  1. Enter the appropriate information to connect to the database:
    • JDBC Driver: Use the drop-down to select which JDBC Driver to use for the database connection.
    • JAR file path: Enter a path to the JAR file that contains the appropriate driver for the database below.
    • Server: Enter the server address for the database server.
    • Port: Enter the port to communicate with the server.
    • Database name: Enter the exact name of the database from where the data should be extracted.
    • User name: Enter a username that has access to the server and specified database. The user only requires Read access to the database.
    • Password: Enter the password that matches the username above.
    • Advanced mode: Check this option to enable the Connection string field, so that you can enter the database connection string manually.
    • Connection string: Type or copy in your connection string. The exact format depends on the driver; for example, a MySQL JDBC connection string typically looks like jdbc:mysql://server:port/database.
  2. Click Next and enter the information for the source table.
    • Connection string: Displays the full path to the database.
    • Table: Use the drop-down to select the appropriate table or stored query to retrieve the appropriate data set.
    • Encoding: Use the drop-down to select the encoding with which to read the data in the table.
      • Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual. Even if the selected column has numbers, it will be sorted as text.

  3. Click Finish to open the database.

Oracle

  1. Enter the appropriate information to connect to the database:
    • Server: Enter the server address for the Oracle database.
    • Port: Enter the port to communicate with the Oracle server.
    • Database name: Enter the exact name of the database from where the data should be extracted.
    • User name: Enter a username that has access to the Oracle server and specified database. The user only requires Read access to the database.
    • Password: Enter the password that matches the username above.
  2. Click Next and enter the information for the source table.
    • Connection string: Displays the full path to the database.
    • Table: Use the drop-down to select the appropriate table or stored query to retrieve the appropriate data set.
    • Encoding: Use the drop-down to select the encoding with which to read the data in the table.
      • Sort on: Select a field on which to sort the data, in ascending (A-Z) or descending (Z-A) order. Note that sorting is always textual. Even if the selected column has numbers, it will be sorted as text.

  3. Click Finish to open the database.

After adding data from a database, the Data Model pane at the bottom right shows the data fields that occur in the data.

The Value column displays data from the first record. Use the First, Previous, Next and Last buttons to browse through the records.

Adding JSON sample data

JSON data can either be added to the Data Model in a template, or replace it.

Importing JSON data into the Data Model makes it easier to test templates that are meant to be merged with JSON data in a Workflow configuration.

(Workflow's OL Connect Create Content tasks can use JSON as their data source; see Create Email Content, Create Print Content, and Create Web Content.)

To add JSON sample data to the Data Model:

  1. Select File > Add Data > JSON sample data from the menu. Alternatively, click the JSON Sample Data toolbar button on the Data Model pane.
  2. Either browse to the location of the file and select it, or paste or write the JSON directly in the box below the file name.
  3. If a data field contains JSON data, you can use that data as sample data: right-click the field and select Copy to copy the JSON data to the clipboard; then open the JSON Sample Data dialog and paste the data there.

    You can add the following types of JSON data:

    • A JSON object or an array of JSON objects representing records. The data type is derived from the data:
      • Any value surrounded with quotes is converted to a field of type String.
      • Any numeric value containing a period is converted to a field of type Float.
      • Any numeric value that does not contain a period is converted to a field of type Integer.
      • A value "true" or "false" is converted to a field of type Boolean.

      If a value in a record object is a JSON object, it is considered to be a nested table with detail records (see the example after these steps).

    • Typed JSON. This JSON follows the structure of a JSON Record Data List (see the REST API Cookbook). Field types are determined by the schema object in the JSON.
  4. Review the JSON; you may edit it if you like.
  5. Select the Replace Data Model option if you want the JSON to replace the existing Data Model. Otherwise, the JSON data will be mapped to corresponding fields in the existing Data Model, and data that cannot be mapped to any field will be discarded.
    If a data mapping configuration is open at the same time, the loaded JSON will always replace the Data Model and no values will be imported.
  6. Click Finish.
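
For example, in the following made-up array of record objects, name would become a String field, count an Integer, total a Float and active a Boolean, while details would be treated as a nested table with detail records:

    [
        {
            "name": "Joanna Smith",
            "count": 3,
            "total": 12.50,
            "active": true,
            "details": { "item": "Widget", "price": 4.25 }
        }
    ]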

Add a counter using the Generate Counter Wizard

Generating a counter is useful for numbered tickets or any other template requiring sequential numbers but no variable data.

The Generate Counter Wizard creates a record set with a single Counter field that holds the current counter value for each record. The counter starts at a set value, is incremented by a set value, and runs for a set number of records.

  1. To open the Generate Counter Wizard, select File > Add data > Generate counters.
  2. Adjust the settings:
    • Starting value: The starting number for the counter. Defaults to 1.
    • Increment value: The value by which to increment the counter for each record. For example, an increment value of 3 and a starting value of 1 would give the counter values 1, 4, 7, 10, and so on.
    • Number of records: The total number of counter records to generate. This is not the end value but rather the total number of actual records to generate.
    • Padding character: The character to add to the left of the counter value when it has fewer digits than the set width.
    • Width: The number of digits the counter will have (prefix and suffix not included). If the width is larger than the number of digits in the current counter value, the padding character is added on the left of the counter value until it reaches the set width. For example, with a counter value of "15", a width of "4" and a padding character of "0", the value becomes "0015".
    • Prefix: String to add before the counter, for example, adding # to get #00001. The prefix length is not counted in the width.
    • Suffix: String to add after the counter. The suffix length is not counted in the width.
  3. Click Finish to generate the Counter record set.
While the Generate Counter Wizard is really useful for things like raffle tickets, it cannot be used in combination with a data file or database, as it cannot complement that data automatically. That can only be done with a script. A script that adds a counter to data, using the current record index to calculate the current counter value, can be found in this how-to: Manual counter in designer.
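
A minimal sketch of such a script is given below. It assumes the current record's position is available as record.index and that the script's selector matches the element where the counter should appear; the start, increment, width, padding and prefix values are just examples.

    // Sketch: derive a counter value from the current record's index.
    var start = 1;         // starting value
    var increment = 3;     // increment value
    var width = 4;         // number of digits (prefix not included)
    var padChar = '0';     // padding character
    var prefix = '#';      // prefix

    var value = start + (record.index - 1) * increment;
    var text = String(value);
    while (text.length < width) {
        text = padChar + text;       // pad on the left up to the set width
    }
    results.text(prefix + text);     // e.g. the fifth record yields "#0013"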