Data Sources
A data source connects via FTP and parses the content of a text file. Once configured, it generates mappings based on the labels in the file header. Several settings must be configured to process the file correctly.
Parser settings

1. FTP server
FTP server to collect data from.
2. Folder path
The folder path on the FTP server to collect files from.
3. File match
Only filenames matched by the file match pattern will be read by the data source. If the file match is blank, the data source will read all files in the folder. Partial matches are also valid. For example, if the file in the FTP folder is called data-123-abc.csv, the following file matches are all valid: data-123, 123-abc, .csv.
Advanced pattern syntax:
* matches any sequence of characters.
? matches any single character.
[character-range] matches any character in the character-range within the square brackets.
[^character-range] matches any character that is not in the character-range within the square brackets.
c matches the character c (c cannot be *, ?, \, or [).
\c matches the character c.
Example filename: data-123-abc.csv
Example valid matches: 123, data-*.csv, data-???-abc, data-[0-9][0-9][0-9]-[a-z][a-z][a-z].csv, data-[0-9][0-9][0-9], data-[^a-z][^a-z][^a-z]
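As an illustrative sketch only (not the product's actual implementation), the pattern rules above can be translated into a regular expression, with a search anywhere in the filename to allow partial matches:

```python
import re

def glob_to_regex(pattern: str) -> str:
    """Translate the documented file-match syntax to a regex.
    Illustrative sketch; the real parser may differ."""
    out, i = [], 0
    while i < len(pattern):
        c = pattern[i]
        if c == "*":
            out.append(".*")              # any sequence of characters
        elif c == "?":
            out.append(".")               # any single character
        elif c == "[":
            j = pattern.index("]", i + 1)
            out.append("[" + pattern[i + 1:j] + "]")  # [range] or [^range]
            i = j
        elif c == "\\":
            i += 1
            out.append(re.escape(pattern[i]))  # \c matches character c
        else:
            out.append(re.escape(c))      # plain character
        i += 1
    return "".join(out)

def matches(pattern: str, filename: str) -> bool:
    # Partial matches are valid, so search anywhere in the name.
    return re.search(glob_to_regex(pattern), filename) is not None

filename = "data-123-abc.csv"
for p in ["123", "data-*.csv", "data-???-abc",
          "data-[0-9][0-9][0-9]-[a-z][a-z][a-z].csv",
          "data-[^a-z][^a-z][^a-z]"]:
    assert matches(p, filename)
```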
4. Delimiter
The delimiter used to separate the data in the file. This will often be a comma in a CSV file.
5. Datetime format
The format of the datetime in the datetime column. For example, YYYY-MM-DD HH:mm:ss would parse 2023-12-24 23:59:00.
Datetime format tokens:
Year: YY (e.g. 70 71 … 29 30), YYYY (e.g. 1970 … 2030)
Month: M (1–12), MM (01–12), MMM (Jan … Dec), MMMM (January … December)
Day: D (1–31), DD (01–31)
Hour: H (0–23), HH (00–23), h (1–12), hh (01–12)
Minute: m (0–59), mm (00–59)
Second: s (0–59), ss (00–59)
Fractional second: S, SS, SSS (0–999)
AM/PM: A (AM PM), a (am pm)
Time zone: ZZ (e.g. -07:00), ZZZ (e.g. -0700), z or zz (e.g. EST, PST)
Unix timestamp: X (seconds), x (milliseconds)
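The tokens above can be related to Python's strptime directives. This is a simplified sketch of that correspondence (the mapping table is an assumption for illustration, not the product's internals):

```python
import re
from datetime import datetime

# Assumed correspondence between the documented tokens and strptime
# directives; simplified for illustration.
TOKENS = {
    "YYYY": "%Y", "YY": "%y",
    "MMMM": "%B", "MMM": "%b", "MM": "%m", "M": "%m",
    "DD": "%d", "D": "%d",
    "HH": "%H", "H": "%H", "hh": "%I", "h": "%I",
    "mm": "%M", "m": "%M",
    "ss": "%S", "s": "%S",
    "SSS": "%f",
    "A": "%p", "a": "%p",
}

def to_strptime(fmt: str) -> str:
    # Replace longest tokens first so "MM" is not consumed as two "M"s.
    pattern = "|".join(sorted(TOKENS, key=len, reverse=True))
    return re.sub(pattern, lambda m: TOKENS[m.group(0)], fmt)

fmt = to_strptime("YYYY-MM-DD HH:mm:ss")   # -> "%Y-%m-%d %H:%M:%S"
dt = datetime.strptime("2023-12-24 23:59:00", fmt)
```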
6. Timezone
Timezone of the data in the file. This can differ from the project timezone.
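For example, a timestamp recorded in the file's timezone can be converted to a different project timezone. This sketch uses Python's zoneinfo; the timezone names are hypothetical:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical example: the file's timestamps are in US Eastern time,
# while the project runs in UTC.
file_tz = ZoneInfo("America/New_York")
project_tz = ZoneInfo("UTC")

naive = datetime.strptime("2023-12-24 23:59:00", "%Y-%m-%d %H:%M:%S")
localized = naive.replace(tzinfo=file_tz)      # interpret in file timezone
in_project = localized.astimezone(project_tz)  # convert to project timezone
# 23:59 EST (UTC-5) becomes 04:59 the next day in UTC.
```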
7. Datetime / Labels / Data
The column in the file containing the datetime, the row in the header containing the labels, and the row where the data starts. See File mapping.
8. Collection interval
The collection interval in minutes.
9. Out of date period
If no data is collected in this period, the data parser is flagged as OVERDUE.
10. Last reading datetime
The most recent reading collected for the parser. See also Reprocessing data.
11. Last file datetime
The most recent datetime of the file collected for the parser. This reflects when the file itself was last edited or updated, not the data within it. When a file is moved to the FTP server, this datetime is set at that point. See also Reprocessing data.
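The out-of-date check can be sketched as a simple comparison against the last reading (the function and field names here are assumptions, not the product's internals):

```python
from datetime import datetime, timedelta

def is_overdue(last_reading: datetime, out_of_date_minutes: int,
               now: datetime) -> bool:
    """Flag the parser OVERDUE if no data arrived within the period."""
    return now - last_reading > timedelta(minutes=out_of_date_minutes)

now = datetime(2024, 1, 1, 12, 0)
assert is_overdue(datetime(2024, 1, 1, 9, 0), 120, now)       # 3 h ago, 2 h limit
assert not is_overdue(datetime(2024, 1, 1, 11, 30), 120, now)
```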
File mapping
The file mapping is done after the file parser is set up. The system connects to the server and pulls an example of the first file it can find. This confirms that the setup is correct and that the file you want to parse is being collected.
The datetime column, the label row, and the row after the header where the data starts are then selected.
Column
Column format maps each column to a unique sensor input.
Row
Row format includes a sensor identifier in each row, which determines the sensor each value maps to.
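The difference between the two layouts can be illustrated with two small example files (the contents and label names are hypothetical):

```python
import csv
import io

# Column format: each column after the datetime is one sensor input.
column_format = """datetime,temperature,humidity
2023-12-24 23:59:00,21.5,40
2023-12-25 00:59:00,21.1,42
"""

# Row format: a sensor column in each row identifies which sensor the
# reading belongs to, so one file can interleave many sensors.
row_format = """datetime,sensor,value
2023-12-24 23:59:00,temperature,21.5
2023-12-24 23:59:00,humidity,40
"""

col_rows = list(csv.reader(io.StringIO(column_format)))
assert col_rows[0] == ["datetime", "temperature", "humidity"]

row_rows = list(csv.reader(io.StringIO(row_format)))
assert row_rows[1][1] == "temperature"
```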

Sensor mapping
Mappings link data to the input of a sensor. When accessed from within a file parser, only the data from that parser is available.
Mappings can also be configured on individual sensors, which additionally allows data to be connected from other parsers and sensors.

Logs
The data source log can be used to see what data has been collected and whether there have been any issues.

Reprocessing data
On the data source, we store the timestamp of the last file processed and the last reading in that file. This prevents redundant data processing during each collection. These timestamps can be reset or adjusted to specific dates to reprocess old data. A reset icon is available for each timestamp, and there is a "Run" button at the bottom to initiate a manual collection. You can view the number of files and readings processed in the logs tab.
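The skip logic described above can be sketched as follows (the function and field names are assumptions for illustration):

```python
from datetime import datetime

def should_process_file(file_mtime: datetime, last_file: datetime) -> bool:
    # Files at or before the stored last-file datetime were already collected.
    return file_mtime > last_file

def should_store_reading(reading_ts: datetime, last_reading: datetime) -> bool:
    # Readings at or before the stored last-reading timestamp are redundant.
    return reading_ts > last_reading

# Resetting either timestamp to an earlier date makes old files and
# readings eligible again, which is how old data gets reprocessed.
last_file = datetime(2023, 12, 1)
assert should_process_file(datetime(2023, 12, 24), last_file)
assert not should_process_file(datetime(2023, 11, 30), last_file)
```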

Troubleshooting
For symptom→cause→fix guidance on OVERDUE, missing or wrong data, reprocessing, and logs, see Troubleshooting.