Groups a number of general import properties.
Definition of the tolerance used when snapping imported time values to the cardinal time steps of the
series they are imported into. The tolerance is defined per location/parameter combination or per parameter. Multiple entries may
exist.
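The snapping behaviour described above can be sketched as follows (an illustrative Java sketch, not FEWS code; the one hour cardinal step and five minute tolerance are hypothetical values):

```java
public class TimeToleranceDemo {
    // Snap a timestamp to the nearest cardinal time step; return -1 when the
    // deviation exceeds the tolerance (the value would then not be snapped).
    static long snap(long timeMillis, long stepMillis, long toleranceMillis) {
        long nearest = Math.round((double) timeMillis / stepMillis) * stepMillis;
        return Math.abs(timeMillis - nearest) <= toleranceMillis ? nearest : -1L;
    }

    public static void main(String[] args) {
        long hour = 3_600_000L, fiveMinutes = 300_000L;
        // 10:02 snaps to the 10:00 cardinal step...
        System.out.println(snap(36_120_000L, hour, fiveMinutes)); // 36000000
        // ...but 10:20 is outside the 5 minute tolerance
        System.out.println(snap(37_200_000L, hour, fiveMinutes)); // -1
    }
}
```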
Defines a delay to apply to time series during import. A positive value shifts the timestamps forward (delays them) before the data is imported.
Defines the shift of the ExternalForecastStartTime relative to the time of the first value in the imported array.
Available since Delft-FEWS version 2010.02. These properties are passed to the time series parser that is used for this import. Some (external third party) parsers need these additional properties. See documentation of the (external third party) parser you are using.
Only the specified time series are imported; others are skipped. When no sets are specified, all time series that can be mapped with the id map are imported.
Since 2013.01. FEWS-7379. The time series are imported as temporary. When true, it is not necessary to add the locations/parameters to locations.xml and parameters.xml.
Since 2014.01.
Since 2014.01.
Since 2021.01.
This is a boolean flag indicating whether the value is an accumulation. This is the case in some forecast grids where the precipitation in each cell increases through the forecast.
The import module will then calculate the actual value for each time step from the difference with the previous time step. Only available for forecast time series.
This is a boolean flag indicating whether the value is an accumulated mean. The import module will then calculate the actual value for each time step from the difference with the previous time step. Only available for forecast time series.
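The de-accumulation described above amounts to differencing consecutive time steps. A minimal sketch (not FEWS code; the accumulated precipitation values are hypothetical):

```java
import java.util.Arrays;

public class DeAccumulateDemo {
    // Derive per-step values from an accumulated forecast series by
    // differencing consecutive time steps; the first step is kept as-is.
    static double[] deaccumulate(double[] accumulated) {
        double[] perStep = new double[accumulated.length];
        for (int i = 0; i < accumulated.length; i++) {
            perStep[i] = i == 0 ? accumulated[i] : accumulated[i] - accumulated[i - 1];
        }
        return perStep;
    }

    public static void main(String[] args) {
        // Hypothetical accumulated precipitation (mm since forecast start)
        double[] accumulated = {0.0, 1.5, 4.0, 4.0, 6.5};
        System.out.println(Arrays.toString(deaccumulate(accumulated)));
        // [0.0, 1.5, 2.5, 0.0, 2.5]
    }
}
```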
Deprecated. Use the gribTimeSeriesReader configuration instead.
Choice between a standard import type (specified by enumeration) or another import type.
This type specifies which reader should be used to read the file. The type must be one of the enumeration values.
This type specifies which reader should be used to read the file. It may be any string, as long as the type is supported by the TimeSeriesImport module.
Fully qualified name of a Java class that implements a time series parser
interface.
Directory with jar files and optionally native DLLs. When not specified, the
bin dir and class loader of FEWS are used.
When specified, the Java class is executed in a private class loader and will not use any
jar in the FEWS bin dir. Only one class loader is created per binDir,
so adapters should still not use static variables.
Name of dataset file containing the binaries located in the 'binDir'. Use this to update 'binDir' through configuration changes.
Folder in which the import files are located. The folder is scanned recursively, so files in sub folders are also imported.
Since 2017.01. Filters out files and sub dirs that are not part of the list. The list of date times is created by specifying a loop.
e.g. *.xml to skip non-xml files. Only the * and ? wildcards are recognized.
Use ? to indicate the position of the ensemble member index in the filename, for example cosmo_???.dat. Literal parts that contain unrelated characters can also be replaced with any other character, for example xxxxxx???.dat. This is useful if the filename keeps changing, for example because of the presence of a date in the filename.
'IMAGE_'yyyyMMdd_HHmmss'.jpg'
This will overrule the observation time stored in the file. Some grid formats don't contain the
time at all, so for these files the pattern is required.
Put the literal parts of the pattern between ' characters.
'IMAGE_'yyyyMMdd_HHmmss'.jpg'
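The date time patterns follow Java SimpleDateFormat syntax, so the example pattern above can be exercised directly (an illustrative sketch, not FEWS code; the file name is hypothetical):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class FileNameDateDemo {
    // Parse the observation time out of a file name using the documented
    // pattern; literal parts of the pattern are quoted with ' characters.
    static String observationTime(String fileName) {
        SimpleDateFormat in = new SimpleDateFormat("'IMAGE_'yyyyMMdd_HHmmss'.jpg'");
        SimpleDateFormat out = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        in.setTimeZone(TimeZone.getTimeZone("GMT"));
        out.setTimeZone(TimeZone.getTimeZone("GMT"));
        try {
            return out.format(in.parse(fileName));
        } catch (ParseException e) {
            return null; // file name does not match the pattern
        }
    }

    public static void main(String[] args) {
        System.out.println(observationTime("IMAGE_20240105_120000.jpg"));
        // 2024-01-05 12:00:00
    }
}
```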
This will overrule the forecast time stored in the file. Some grid formats don't contain the
forecast time at all, so for these files the pattern is required.
Put the literal parts of the pattern between ' characters. When the file name also contains non-forecast date times,
put these parts between ' characters and use the ? wildcard.
All forecasts with the same forecast time belong to the same forecast.
When the filename starts with the pattern, do not use quotes at the start: yyyyMMdd'_bla.nc'
'hd_'yyyyMMdd_HHmmss'.zip'
This will overrule the forecast time stored in the file and use the timestamp given in the .zip file name.
Put the literal parts of the pattern between ' characters. When the .zip file name also contains non-forecast date times,
put these parts between ' characters and use the ? wildcard.
All forecasts with the same forecast time belong to the same forecast.
Regular expression. When a match of the pattern is found in the filename, this will overrule
the location id for the time series being imported.
A simple pattern is (without quotations) '(.*)', which matches the whole filename.
Another simple pattern is .{2}(.*).{4}, which removes the first 2 and last 4 characters of the filename to obtain the id.
More complicated expressions are described at http://en.wikipedia.org/wiki/Regular_expression
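Both example patterns can be tried with standard Java regular expressions (an illustrative sketch, not FEWS code; the file name HQstation7.grd is hypothetical):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LocationIdPatternDemo {
    // Return capture group 1 when the whole file name matches, else null.
    static String extract(String regex, String fileName) {
        Matcher m = Pattern.compile(regex).matcher(fileName);
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // (.*) keeps the whole file name as the id
        System.out.println(extract("(.*)", "HQstation7.grd"));      // HQstation7.grd
        // .{2}(.*).{4} drops the first 2 and last 4 characters
        System.out.println(extract(".{2}(.*).{4}", "HQstation7.grd")); // station7
    }
}
```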
Regular expression. When a match of the pattern is found in the filename, this will overrule
the parameter id for the time series being imported.
A simple pattern is (without quotations) '(.*)', which matches the whole filename.
Another simple pattern is .{2}(.*).{4}, which removes the first 2 and last 4 characters of the filename to obtain the id.
More complicated expressions are described at http://en.wikipedia.org/wiki/Regular_expression
When the total size of all files in the specified directory is larger
than the specified number
of MB, an error is logged and the import of files from this directory is
skipped.
Files that could not be imported due to an error are copied to this folder.
Successfully imported files are moved to this folder.
Files from a suspended import are moved to this folder.
Since 2020.01. Full path to the files that trigger the import. If configured, the import will start when all configured files are found. The files are deleted at the end of the import run.
Since 2024.01. File name pattern defining a complete forecast. When as many files as configured match the pattern, they can be imported. If the waiting time has passed before all files have arrived, the files that are present are moved to the failed folder.
Boolean to specify whether or not to delete files after import. This is useful when files are needed for multiple imports.
Since 2020.01. If configured as true, ftp passive mode will be used for the import.
e.g. com.microsoft.jdbc.sqlserver.SQLServerDriver
or com.mysql.jdbc.Driver
Directory with driver jar files and optionally native dlls/sos. When not specified, the bin dir and class loader of FEWS are used. When specified, the jdbc driver class is loaded from this binDir and will not use any jar in the FEWS bin dir.
Connection string to external database.
Test the connection string in a database visualiser before using it.
When specified, the connection to the database is first tested before asking the jdbc driver to connect.
Some jdbc drivers use a very long timeout.
(Internet) Url to a time series server. The specified time series parser communicates with this server. Tags should be separated by "%" signs. The following tags can be used in this URL: %TIME_ZERO(dateFormat)% is replaced with the time0 of this import run. The time0 is formatted using the dateFormat that is specified between the brackets. For example %TIME_ZERO(yyyyMMdd)% would be replaced with the year, month and day of the time0. %RELATIVE_TIME_IN_SECONDS(dateFormat,relativeTime)% is replaced with time = (time0 + relativeTime), where time0 is the timeZero of this import run and relativeTime is a time relative to time0 in seconds (can be negative). The time is formatted using the dateFormat that is specified as the first argument between the brackets. For example %RELATIVE_TIME_IN_SECONDS(yyyyMMdd,-18000)% would be replaced with the year, month and day of time = time0 - 18000 seconds.
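The tag expansion described above can be illustrated with plain Java date formatting (a sketch, not FEWS code; the time0 value is hypothetical):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class UrlTagDemo {
    // %RELATIVE_TIME_IN_SECONDS(dateFormat,relativeTime)% expands to
    // format(time0 + relativeTime seconds); relativeSeconds 0 corresponds
    // to %TIME_ZERO(dateFormat)%.
    static String expand(long time0Millis, String dateFormat, long relativeSeconds) {
        SimpleDateFormat fmt = new SimpleDateFormat(dateFormat);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return fmt.format(new Date(time0Millis + relativeSeconds * 1000L));
    }

    public static void main(String[] args) {
        long time0 = 1704412800000L; // hypothetical T0: 2024-01-05 00:00 GMT
        System.out.println(expand(time0, "yyyyMMdd", 0));      // 20240105
        System.out.println(expand(time0, "yyyyMMdd", -18000)); // 20240104 (5 h before T0)
    }
}
```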
Optional URLs defining backup import locations.
Since 2014.02. FEWS-11593. e.g. *.xml to skip non-xml files from the server. Only the * and ? wildcards are recognized. Only supported by specific import types and parsers.
Since 2017.02. Currently only supported by import from OpenDAP. An example: 'http://nomads.ncep.noaa.gov:9090/dods/gfs_0p25_1hr/gfs'yyyyMMdd'/gfs_0p25_1hr_'HH'?'. This will overrule the observation time stored in the file. Some grid formats don't contain the time at all, so for these files the pattern is required. Put the literal parts of the pattern between ' characters.
Optional timeout in milliseconds before a connection is deemed as unavailable.
User name and password, required for protected database connections and protected servers.
Optional choice. The period for which data should be read can be specified here. Can be either a relative period (relative to time 0) or a fixed period.
Group that can be used for either a relative or absolute period
Since 2022.02. Only import for periods where there are gaps in the data. Only supported for specific imports; an error is logged when not supported.
Since 2021.01, it is possible to import older forecast data within a given period. Only supported for specific imports; an error is logged when not supported.
Since 2021.01, for each time step within the period, the parser is used to import forecast data.
Skips all forecasts that are older than the specified age. The (external) forecast time and the time0
are used to calculate the age. The import file modification time and the current actual time are not used.
Currently the generalCsv and database imports require a table layout description configured by the user. Non-standard imports (plugins) can also require a table layout; see the documentation of the specific import. The order of the listed columns is used to calculate the column indices when no column name information is available. Add the columns that are not used as skipped columns, so that the column indices can be calculated correctly.
Option to allow validation of the import files against the template, i.e. an xml schema. If no template is available, this option will be ignored.
logErrorsAsWarnings
Since 2014.01. Exceptions occurring in a parser, as well as some non-parser-specific log messages such as "Import folder ... does not exist" or "Can not connect to...", can be logged as Error or as Warning. Use this option to change this. Default is logErrorsAsWarnings=true. Configure false if you want clear alert notifications in the SystemMonitor and the Explorer status bar.
Since 2018.02. Exceptions occurring in a parser, as well as some non-parser-specific log messages such as "Import folder ... does not exist" or "Can not connect to...", can be logged to file and database. Set this option to true if you do not wish to log them to the database.
When true, warnings are logged when time series in the imported files are skipped. By default unmappable time series are silently skipped.
If an import file contains time series that cannot be mapped, use this option to mark the file as failed.
When true, warnings are logged when locations in the imported files are skipped. By default unmappable locations are silently skipped.
Since 2014.02. FEWS-10839. When true, warnings are logged when parameters in the imported files are skipped. By default unmappable parameters are silently skipped.
Since 2014.02. FEWS-10839. When true, warnings are logged when qualifiers in the imported files are skipped. By default unmappable qualifiers are silently skipped.
If an import file contains locations that cannot be mapped, use this option to mark the file as failed.
Since 2014.02. FEWS-10078. Maximum number of warnings logged per import. When not specified, the maximum is 5. Only applies to general import warnings, not parser-specific warnings.
Since 2014.02. FEWS-10839. Log warnings for an import file to a specific file called [importFileName].log. This file will be placed in either the backupFolder or the failedFolder, depending on whether they are configured and whether the import was successful.
Available since 2020.02. Default is false. If set to true, FileNotFound warnings are only logged in debug mode.
Id of IdMap to be used for parameter, location, qualifier and ensemble member mapping
Since 2023.01. If value is set to true, data will only be imported if the module instance id specified in the config file matches the module instance id of the downloaded data.
Since 2012.02. When the parser provides the standard name, parameter mapping can be done by matching the standard name. The standard name of the parameter in the time series set should be configured in parameters.xml, and the standard name should be provided by the import format; if not, an error is logged. When the maximumSnapDistance is also configured, no id map is required at all.
Since 2012.02. Optional maximum horizontal snap distance in meters. When the parser provides horizontal location coordinates (x,y) and no locationIds, the location mapping is done by matching the horizontal coordinates. The horizontal snap distance is the tolerance used to detect which internal and external horizontal coordinates are the same. Don't forget to configure the geoDatum when the input format does not provide the coordinate system for the locations. When the parser does not provide the coordinates for a time series, an error is logged. Note: this option has no effect for grid data. Note 2: it is not possible to import data using horizontal coordinates and data using locationIds in the same import; define separate import elements for that (one with maximumSnapDistance and one without).
Since 2014.02. Optional maximum vertical snap distance in meters. When the parser provides vertical location coordinates (z) and no locationIds, the location mapping is done by matching the vertical coordinates.
The vertical snap distance is the tolerance used to detect which internal and external vertical coordinates are the same. This only works when the input format provides the coordinates of the locations. When the parser does not provide the vertical coordinates for a time series, an error is logged.
Note: this option currently only works for importing horizontal layers from netcdf 3D grid data. Note 2: it is not possible to import data with z-coordinates (layers from 3D grids) and data without z-coordinates (2D grids) in the same import; define separate import elements for that (one with maximumVerticalSnapDistance and one without).
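The snap-distance matching described above boils down to a nearest-coordinate test within the tolerance. A minimal sketch of the horizontal case (not FEWS code; the locations, ids and distances are hypothetical):

```java
public class SnapDistanceDemo {
    // Return the id of the first internal location within maxDistance metres
    // of the external coordinate, or null when nothing is close enough.
    static String match(double x, double y, double maxDistance,
                        double[][] internalXY, String[] internalIds) {
        for (int i = 0; i < internalXY.length; i++) {
            if (Math.hypot(internalXY[i][0] - x, internalXY[i][1] - y) <= maxDistance) {
                return internalIds[i];
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Hypothetical internal locations and ids
        double[][] xy = {{1000.0, 2000.0}, {5000.0, 8000.0}};
        String[] ids = {"H-2001", "H-2002"};
        System.out.println(match(1003.0, 2004.0, 10.0, xy, ids)); // H-2001 (5 m away)
        System.out.println(match(1200.0, 2000.0, 10.0, xy, ids)); // null (too far)
    }
}
```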
Since 2017.02. When true, the location is resolved by the layer sigma coordinate.
Id of UnitConversions to be used for unit mapping
If this flag is true, the external unit does not equal the internal unit, and no unit mapping is available, the time series will not be imported.
Id of flagConversions to be used for flag mapping
Values that are replaced by NaN during import. NaN is always recognized as missing value.
Values that are replaced by NaN during import. NaN is always recognized as missing value.
All trace values are replaced by 0 during import, before datum conversion
Time zone of the import data. If this is not configured, then GMT is used.
Identification of the cell considered as the first cell of the grid. Enumeration of options: NW for upper left, SW for lower left, NE for upper right, SE for lower right. This option should only be used if the NetCDF file is not CF compliant and contains insufficient metadata about the grid, its orientation, etc.
Type of coordinate system
Configuration of the reader specified with importType.
Since 2015.02. FEWS-13479. Optional choice for storing the import status in the database (table ImportStatus). If this choice is omitted, the import status is stored in the database using the import folder name as dataFeedId.
Id for the data feed. The import status will be stored in the database using this id.
The option disableDataFeedInfo indicates that no import status should be stored in the database.
Convert datum from local datum during import. The conversion will be done for all parameters that use a datum (as configured in Parameters.xsd).
Since 2019.02. Skip missing values so they do not overwrite existing values in the database.
Since 2021.02. Skip empty text values so they are not imported as missing values.
Since 2021.01. Option to skip time series sets if the location set does not exist. Useful when a module or workflow is run in a loop with tags being translated. When omitted, defaults to 'false'.
Option to allow printing messages to the log file if the imported values differ from the previously imported values.
ID of the action message that must be logged if any data is imported. This message is then used to start an action as configured in the MasterController config files (e.g. start a forecast).
Comment to add to the imported values, visible in the time series dialog as tooltip
Sync level used when not specified in the time series sets
Expiry time used when not specified in the time series sets
Optional forecast time relative to the T0 of the import run. All imported external forecast time series will get this forecast time. This overrules any forecast times stored in the imported data itself. All time series with the same forecast time belong to the same forecast.
Skips the first n lines of a file. An error is logged when this option is configured for a binary file.
Since 2017.02: Find the first line that starts with the specified string. All lines before it will be skipped by the import. Only supported for line-based text imports.
Regular expression that specifies which comments should be ignored during import. For instance the regular expression ^(FEWS).* specifies that comments starting with "FEWS" must be ignored. This can also be extended for multiple strings: for instance the regular expression ^(FEWS|DELTARES).* specifies that comments starting with "FEWS" or "DELTARES" must be ignored.
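The matching of such a regular expression against comments can be checked with plain Java (an illustrative sketch, not FEWS code; the comment texts are hypothetical):

```java
public class CommentFilterDemo {
    // A comment is ignored during import when it matches the configured regex.
    static boolean ignored(String comment) {
        return comment.matches("^(FEWS|DELTARES).*");
    }

    public static void main(String[] args) {
        System.out.println(ignored("FEWS interpolated value")); // true
        System.out.println(ignored("validated by operator"));   // false
    }
}
```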
List of exact comments that should be ignored
Since 2021.02. Default is true. When false, imported empty comments overwrite existing comments. Note: this will also overwrite comments written by users when data is reimported.
Since 2014.02. When true, newly imported sample data will overwrite and extend existing sample data, keeping non-overwritten data from the same sample. When false, the non-overwritten sample data from the same sample will be removed. Be careful with changing qualifiers when set to true, because changing a qualifier results in a different time series: this "new" time series will be added to the sample and the "old" one will remain, instead of the old being overwritten by the new. Also, sample properties cannot be changed when merging, because sample properties should be the same for the entire sample; this applies to both already imported and newly imported data for the same sample.
Since 2014.02. When true, if part of an imported sample cannot be mapped to a time series in FEWS, the whole sample will be rejected; when false, part of a sample can be imported.
Since 2021.02. When true, if multiple values are imported for the same time step within a time series, the sample is rejected and a warning is logged.
Since 2022.02. When true, annotations with the same location and time but different content (both the annotation itself and its properties) will overwrite each other instead of coexisting. Default is false.
Since 2018.02. When true, import types that trim the requested period to the last imported time step will not make an exception (and skip the entire period) when the last imported time step is after the requested period.
Since 2016.01. User-defined column separator used for specific serializers like generalCsvSerializer.
Since 2016.01. User-defined decimal separator used for specific serializers like generalCsvSerializer.
Since 2016.02. For formats that support a custom date time pattern, e.g. yyyy/MM/dd HH:mm:ss.
Since 2017.01. Character set of the import file. Specify this only when the file contains non-western characters.
Date time pattern used for %IMPORT_DATE_TIME%, %FILE_DATE_TIME%
Time used for formatting %IMPORT_DATE_TIME%, %FILE_DATE_TIME%
%IMPORT_DATE_TIME%, %FILE_DATE_TIME%, %FILE_NAME%
%IMPORT_DATE_TIME%, %FILE_DATE_TIME%, %FILE_NAME%
Allows the external forecast start time to begin prior to the begin time of the imported time series.
Overview of supported parsers
SINCE 2018.02. Import files according to the http://dms.ec.gc.ca/schema/point-observation/2.1 schema.
Since 2017.02. Covadem import that uses the area request parameter.
Since 2020.01. Covadem import that uses the track request parameter.
Since 2020.01. Covadem import that uses the track2 request parameter.
Since 2014.01
Fill in the table element for every table you want to import.
Blobs are also supported. Specify the parser attribute in every value column element.
Imports most csv files by configuring the csv table structure in the table element of this config file. Decimal and column separators are automatically recognized.
The first row should contain the column names. The other rows should contain the data.
Since 2020.02
Since 2017.02
SINCE 2024.01.
SINCE 2013.02. Import files according to the https://fewsdocs.deltares.nl/schemas/version1.0/historicalEvents.xsd schema. It is no longer required to maintain historical events as configuration; they can now be moved out of the region config and imported like astronomical time series. Don't configure time series sets when using this import.
Since 2016.02. A csv file with the location id per column on the first line.
Since 2020.02. Imports the latest grid forecast from a netcdf storage (THREDDS).
Since 2020.02. Imports the latest scalar forecast from a netcdf storage (THREDDS).
Specify url, username and password
Can be used to retrieve data from matroos.
Since 2017.01. Import type for Operational Snow-HyDrological service (OSHD) ascii grid files with additional metadata files. Each file contains an ESRI ascii grid with data. For each grid file there must be an additional metadata file with the same name as the grid file, only different extension (.META).
Since 2017.01. Import from FewsPiServer.
Since 2021.02. Import from FewsPiServer SOAP service. Deprecated, please use pi_server instead if possible.
Since 2015.02
Since 2018.02
Since 2017.01
Can read 1D and 2D spectra; see http://swanmodel.sourceforge.net/online_doc/swanuse/node50.htm
Since 2014.01
Since 2014.01
Since 2014.01
Since 2014.01
Since 2017.01: Import WML2 files
Since 2017.01: Import WML2 server
Since 2023.02: Import WML2 data from kiwis service
SINCE 2017.01. Specify a monthly time step for yyyyMM
SINCE 2017.01. e.g. yyyyMM
SINCE 2017.01. Number of characters before date time in the file-name / sub-dir name
SINCE 2017.01. Number of characters after date time in the file-name / sub-dir name
SINCE 2017.01 Level of the sub dir. Specify 0 for the folder itself
For some servers, no CLIENT_ID or CLIENT_SECRET is required. Instead, the auth URL is queried with Basic Authentication using the username and password.
Group that can be used for either a relative or absolute period
Period relative to the time 0 of the run. When the start and end times are overrulable, the user can specify the download length with the cold state time and forecast
length in the manual forecast dialog.
Start date and time of the (fixed) period for which data should be read. Start is inclusive. This dateTime is in the configured importTimeZone.
End date and time of the (fixed) period for which data should be read. End is inclusive. This dateTime is in the configured importTimeZone.