Key Features

What Makes IST Unique and How It Works in Practice

IST transforms metadata into action. In this section, we explore the powerful features that enable statistical offices to design surveys without programming, validate data automatically, and generate outputs in multiple formats - all within a single integrated platform. You’ll learn how IST supports multiple data collection modes (CAPI, CATI, CAWI), performs real-time and batch validation, and aligns seamlessly with the GSBPM 5.1 model. These features aren’t just technical advantages - they are enablers of faster, more reliable, and standard-compliant statistical production.

Metadata-driven Setup

At the core of IST is its metadata-driven architecture - a design that radically simplifies the creation, deployment, and maintenance of statistical applications. Unlike traditional systems that rely heavily on manual programming and fragmented tool chains, IST uses centralized metadata to dynamically generate key system components for collection, processing, validation, and output table generation.

This metadata-centric approach allows statistical offices to define the logic of their surveys, including variables, rules, classifications, validation checks, and output structures - directly in the IST metadata database. Once configured, IST automatically interprets these definitions and generates fully functional applications for both data entry (desktop or web-based) and post-collection processing.
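As a simplified illustration of this idea, the sketch below builds a working validator directly from a metadata definition, so the survey logic lives in data rather than code. The field names and metadata keys are invented for the example and do not reflect IST's actual metadata schema.

```python
# Toy metadata definition: variables, types, ranges, and value lists.
# Keys ("min", "values", ...) are illustrative, not IST's real schema.
SURVEY_METADATA = [
    {"name": "age",    "type": int, "min": 0, "max": 120, "label": "Age"},
    {"name": "region", "type": str, "values": ["North", "South"], "label": "Region"},
]

def build_validator(metadata):
    """Interpret the metadata and return a record-validation function."""
    def validate(record):
        errors = []
        for field in metadata:
            value = record.get(field["name"])
            if not isinstance(value, field["type"]):
                errors.append(f"{field['name']}: wrong type")
                continue
            if "min" in field and value < field["min"]:
                errors.append(f"{field['name']}: below minimum")
            if "max" in field and value > field["max"]:
                errors.append(f"{field['name']}: above maximum")
            if "values" in field and value not in field["values"]:
                errors.append(f"{field['name']}: not in value list")
        return errors
    return validate

validate = build_validator(SURVEY_METADATA)
print(validate({"age": 34, "region": "North"}))   # []
print(validate({"age": 150, "region": "East"}))   # two errors
```

Changing the survey here means editing the metadata list, not the code — the same principle that lets IST regenerate applications after a metadata update.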

Key advantages of metadata-driven setup in IST include:
  • Codeless Application Generation
    Applications are generated in real time from metadata definitions, eliminating repetitive and error-prone manual coding.
  • Rapid Adaptability
    Changes to survey design or structure can be made instantly via metadata updates, making IST highly responsive to evolving statistical needs.
  • Cross-domain Standardization
    Since the same metadata structure is used across surveys, it enforces methodological consistency and supports alignment with the GSBPM model.
  • Empowerment of Statisticians
    With proper training, subject-matter statisticians can manage their own application logic without relying on programmers, enhancing collaboration between IT and statistical divisions.
  • Scalability and Reusability
    Metadata templates can be reused across different surveys, improving efficiency and ensuring coherence in questionnaire logic, validation rules, and reporting formats.

In practice, IST’s metadata layer forms the backbone of the system and connects all functional areas: from survey design to data validation, reporting, and integration with external tools. It is what enables IST to remain light, flexible, and truly sustainable in diverse NSO environments.

Survey Design & Data Collection

IST enables comprehensive survey design and data collection through an intuitive, metadata-driven interface that requires no programming. By defining variables, structures, validation rules, and logical flows in the metadata tables, statisticians and IT staff can create fully functional data entry applications that are consistent, standardized, and rapidly deployable.

Design Once, Use Across Modes

IST supports the design of a single questionnaire that can be used across multiple modes of data collection:
  • CAPI (Computer-Assisted Personal Interviewing)
  • CATI (Computer-Assisted Telephone Interviewing)
  • CAWI (Computer-Assisted Web Interviewing)
  • PAPI (Paper-based entry, optional via Desktop)
  • IST Desktop, IST Web and IST Android environments
This unified design approach streamlines resource use, reduces duplication, and ensures data comparability regardless of the collection channel.

Real-Time Application Generation

Once a survey’s metadata is defined in IST, the system dynamically generates the data entry interface in real time, supporting:
  • Skip logic, assign logic, custom messages, enable and visibility logic, and validation rules
  • Input constraints, combo boxes, autocompleted text boxes, radio buttons, check boxes, and various formats of DataGrid views
  • Multilingual field labels and tooltips
  • Optional automatic encryption of selected fields in the data database to enhance security
  • Logical controls and computed (virtual) fields
This ensures that even large and complex questionnaires can be deployed quickly without coding.
IST interpreter – real-time form generation from metadata
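A minimal sketch of how skip and visibility logic can be driven by metadata: each field carries an optional condition that is evaluated against the answers entered so far. The rule syntax and field names here are hypothetical and are not IST's actual rule language.

```python
# Illustrative form metadata with visibility rules as boolean expressions.
FORM_METADATA = [
    {"name": "employed",   "label": "Employed?"},
    {"name": "employer",   "label": "Employer name",  "visible_if": "employed == True"},
    {"name": "job_search", "label": "Seeking work?",  "visible_if": "employed == False"},
]

def visible_fields(metadata, answers):
    """Return the names of fields currently shown, given the answers so far."""
    shown = []
    for field in metadata:
        rule = field.get("visible_if")
        # eval is acceptable in this toy; a real interpreter parses rules safely
        if rule is None or eval(rule, {}, dict(answers)):
            shown.append(field["name"])
    return shown

print(visible_fields(FORM_METADATA, {"employed": True}))
# ['employed', 'employer']
```

Because the conditions are data, the interpreter can re-evaluate them after every answer, which is what makes real-time skip logic possible without per-survey code.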

Built-in Validation and Logical Control

IST ensures data integrity at the point of entry through:
  • Field-level validation (type, length, range, value list)
  • Cross-field consistency checks
  • Lookup tables and reference datasets
  • Batch logical controls for post-collection error checking

These mechanisms are fully controlled by metadata definitions in tables like _ISTTablesColumns and _ISTRulesLogicalControl, promoting both flexibility and standardization.
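In the spirit of rules stored in a table like _ISTRulesLogicalControl, a cross-field consistency check can be expressed as a boolean condition plus an error message, evaluated over the whole record. The rule expressions below are invented for illustration; IST's actual rule format is defined in its metadata tables.

```python
# Each rule: (condition that must hold, message shown when it does not).
RULES = [
    ("age >= 15 or employed == False", "Children cannot be employed"),
    ("end_year >= start_year",         "End year precedes start year"),
]

def check_record(record, rules=RULES):
    """Return the messages for every rule the record violates."""
    return [msg for expr, msg in rules if not eval(expr, {}, dict(record))]

bad = {"age": 12, "employed": True, "start_year": 2024, "end_year": 2023}
print(check_record(bad))  # both messages
```

Storing rules as data rather than code is what lets subject-matter staff adjust consistency checks without redeploying the application.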

Integrated Survey Workflow

Integrated Tools Supporting Reliable Data Collection
From design to live collection, IST supports the entire data collection lifecycle:
  1. Survey configuration in metadata tables
  2. Real-time application deployment (Desktop, Web, Android)
  3. Logical validation at entry
  4. Batch controls and corrections
  5. Preparation for analysis or output table generation

CAWI and Remote Collection

For web-based data collection, IST includes a module that generates a full CAWI application as an .ASPX site, supporting:
  • Secure respondent login
  • Customized user interface per survey
  • Preloading data for registered respondents
  • Full compliance with statistical collection workflows
Example CAWI web application interface

Data Processing & Validation

IST ensures robust, transparent, and efficient data processing and validation, enabling national statistical offices to maintain high-quality, GSBPM-compliant data throughout the statistical production lifecycle.

Metadata-Driven Logical Control and Batch Validation

IST uses metadata rules to perform automated validation on both the record and batch level. Logical controls and validation rules are defined in metadata tables such as _ISTRulesLogicalControl and _ISTRulesDataValidation, allowing:

  • Checks during data entry (real-time validation)
  • Batch validation across the entire dataset, the entire database (cross-checks against other tables or tables in different databases), and any other database on the NSO system
  • Triggering of error flags and customized messages
  • Conditional automatic corrections
Sample of _ISTRulesDataValidation table showing how validation rules are defined

Real-Time Validation at Point of Entry

At the point of data entry, IST executes logical validation rules defined for each field. Rules are triggered:
  • Automatically, when a user exits a field (onValidation event); all rules that reference that field are triggered
  • Upon clicking the LC (Logical Control) button
  • Automatically on save, ensuring that no invalid record is persisted silently
Validation is color-coded (e.g. red for severe errors, yellow for minor ones) to guide users in real time.
IST form showing highlighted fields with validation errors (simulated CAWI form)

Batch Validation Module

The Batch Data Validation module allows for:
  • Full validation of all records based on rules defined in _ISTRulesDataValidation
  • Execution of SQL UPDATE commands to flag erroneous data (errNumber bit field)
  • Detailed summary of rule violations per table, time point, and user
Batch processing improves performance by validating one rule at a time, preventing server deadlocks.
Batch validation execution flow (rule application and error logging)
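The batch flow described above — one SQL UPDATE per rule, flagging violating rows — can be sketched with an in-memory database. The table, column names, and rule list are illustrative stand-ins for IST's actual schema (e.g. the errNumber field and _ISTRulesDataValidation).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey (id INTEGER, age INTEGER, errNumber INTEGER DEFAULT 0)")
conn.executemany("INSERT INTO survey (id, age) VALUES (?, ?)",
                 [(1, 34), (2, 150), (3, -5)])

# Rules in the spirit of _ISTRulesDataValidation: a WHERE clause plus a rule id.
rules = [
    ("age > 120", 101),   # rule 101: implausibly high age
    ("age < 0",   102),   # rule 102: negative age
]

for condition, rule_id in rules:
    # one UPDATE per rule keeps each transaction short, which helps avoid deadlocks
    conn.execute(f"UPDATE survey SET errNumber = ? WHERE {condition}", (rule_id,))
conn.commit()

flagged = conn.execute(
    "SELECT id, errNumber FROM survey WHERE errNumber <> 0 ORDER BY id").fetchall()
print(flagged)  # [(2, 101), (3, 102)]
```

Flagging rows instead of rejecting them leaves the data in place for review, which is what the error-summary reporting builds on.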

Automatic Correction Capabilities

IST supports automated correction of data based on predefined SQL actions stored in the metadata (e.g. errAction field in _ISTRulesDataValidation). This allows:
  • Fixing of known systemic errors
  • Suggested corrections based on prior patterns
  • Reduction in manual editing workload
Corrections can be configured to run automatically or as user-confirmed steps.
Example SQL error correction logic from metadata
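A hedged sketch of metadata-stored corrections in the spirit of the errAction field: each entry pairs a detection condition with a corrective SQL action, and the action only runs when the condition actually matches rows. All names and the correction itself are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey (id INTEGER, months_worked INTEGER)")
conn.executemany("INSERT INTO survey VALUES (?, ?)", [(1, 7), (2, 13)])

# (detection condition, corrective action) pairs, as they might sit in metadata.
corrections = [
    ("months_worked > 12",
     "UPDATE survey SET months_worked = 12 WHERE months_worked > 12"),  # cap at 12
]

for condition, action in corrections:
    hits = conn.execute(f"SELECT COUNT(*) FROM survey WHERE {condition}").fetchone()[0]
    if hits:
        conn.execute(action)  # could instead be queued for user confirmation
conn.commit()

print(conn.execute("SELECT months_worked FROM survey ORDER BY id").fetchall())
# [(7,), (12,)]
```

The same structure supports the user-confirmed mode mentioned above: instead of executing the action immediately, the system presents the hit count and proposed fix for approval.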

Error Reporting and Logs

All validation and correction actions are tracked in system logs, ensuring full transparency and auditability:
  • _ISTLogBatchLC – batch logical control execution results log
  • _ISTVariablesChange – variables change (edit, input) log
  • _ISTLogDelete – records detailed information about each deletion, including the survey name, the table from which the record was deleted, the user account that performed the deletion, and the date, time, and key of the row that was removed
This infrastructure supports quality assurance processes and downstream output table generation.
IST Log Tables Overview (Data Flow from Input to Logs)

Output Table Generation

IST enables flexible and powerful output table generation using SQL-based metadata definitions, allowing statistical offices to produce custom tables, reports, and dissemination files in Excel, XML, JSON, and CSV formats. This function is tightly integrated with the system’s metadata-driven architecture and supports both static reports and dynamic data outputs.

All reports in IST are defined through the _ISTReportsProcedures metadata table. Each report entry includes:
  • Report title and sequence number
  • SQL query or stored procedure for data extraction
  • Output format (e.g. Excel, XML, JSON)
  • Optional macros for post-processing
  • Visibility settings based on user roles
Sample of _ISTReportsProcedures metadata table with report definition fields
When a user initiates a report from the IST interface:
  1. The IST interpreter loads the corresponding metadata (SQL query + header file + optional macro)
  2. Executes the SQL code against the connected MS SQL database
  3. Exports the results into a file of the desired format (Excel/XML/JSON/CSV)
  4. Applies macro (if present) and logs the action in metadata tables
Report execution flow (metadata to final file generation)
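The steps above can be sketched as a small dispatcher: metadata supplies a query and a target format, the interpreter runs the query and serializes the result. The metadata keys and sample table are illustrative rather than _ISTReportsProcedures' real columns, and file writing is replaced by returning the serialized string.

```python
import csv
import io
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pop (region TEXT, total INTEGER)")
conn.executemany("INSERT INTO pop VALUES (?, ?)", [("North", 120), ("South", 80)])

report_meta = {"title": "Population by region",
               "sql": "SELECT region, total FROM pop ORDER BY region",
               "format": "json"}

def run_report(meta, conn):
    """Execute the report's SQL and serialize the result in the requested format."""
    cur = conn.execute(meta["sql"])
    cols = [d[0] for d in cur.description]
    rows = cur.fetchall()
    if meta["format"] == "json":
        return json.dumps([dict(zip(cols, r)) for r in rows])
    if meta["format"] == "csv":
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(cols)
        writer.writerows(rows)
        return buf.getvalue()
    raise ValueError(f"unsupported format: {meta['format']}")

print(run_report(report_meta, conn))
```

Adding a new output format is then a matter of extending the dispatcher, while individual reports remain pure metadata entries.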
IST supports the following formats for output reports:
  • Excel (.xls, .xlsx, .xlsm): Often used with headers and macros for formatting
  • XML and JSON: Structured for data exchange and integration
  • CSV/TXT: For simple exports or raw data sharing
  • ODS: OpenOffice Calc spreadsheet, for offices that do not use MS Office Excel
Each format may include custom headers based on dynamic conditions, and custom macros defined by the developer. Some macros (like ISTmPivotChart.xlsm for pivot table generation) are already integrated in IST interpreter.
Macro/header selection logic for Excel outputs
  • Parameterized Reports: Queries can accept parameters (such as a selected month or year), using =GGG or =MMM to auto-insert time values, or developer-defined placeholders filled with values supplied by the user.
  • Conditional Visibility: Certain reports can be restricted by user role or condition (e.g., #{VisibleFalseIf=(select dbo.f someFunction)})
  • Power BI Integration: Optional links to online reports are stored in the PBLink field.
  • Integration with other programs: Any file or link can be opened; the default program on the user's device is used (*.accdb, *.avi, *.docx, etc.)
Conditional visibility and parameterization in report metadata
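A toy version of the placeholder substitution described above: =GGG and =MMM stand in for year and month, and extra developer-defined placeholders are filled with user-supplied values. The substitution mechanics are a guess at the behaviour, not IST's actual implementation.

```python
from datetime import date

def expand_placeholders(sql, when=None, params=None):
    """Replace =GGG/=MMM time placeholders and any custom placeholders in a query."""
    when = when or date.today()
    out = sql.replace("=GGG", str(when.year)).replace("=MMM", f"{when.month:02d}")
    for name, value in (params or {}).items():
        out = out.replace(name, str(value))  # user-supplied custom parameters
    return out

template = "SELECT * FROM births WHERE year = =GGG AND month = =MMM"
print(expand_placeholders(template, when=date(2024, 3, 1)))
# SELECT * FROM births WHERE year = 2024 AND month = 03
```

The expanded query is then executed exactly like any other report query, so parameterization adds no special cases downstream.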
Each report generation is recorded in the system:
  • _ISTLogProcessUsage: logs every process a user runs in IST
  • _ISTLogExcelXMLJSON: stores the query behind each generated output (each dataset exported from IST)
  • _ISTSavedAdvancedSearch: stores the query behind each output generated in Advanced Search (the script for each dataset exported from it)
These logs ensure full traceability and support alignment with GSBPM 5.1 by documenting dissemination steps.
Logging of report usage and output file generation

GSBPM 5.1 Alignment

The Integrated Statistical Tool (IST) has been developed in full alignment with the Generic Statistical Business Process Model (GSBPM) version 5.1, ensuring that national statistical offices (NSOs) can carry out modern, standardized statistical production using internationally recognized frameworks.

Generic Statistical Business Process Model (GSBPM)

Coverage of All GSBPM Phases

IST Coverage of GSBPM v5.1

IST supports a wide portion of the GSBPM by providing tools that align with nearly all phases - from planning and collection to processing, analysis, and output preparation. Its metadata-driven architecture ensures that workflows are transparent, repeatable, and adaptable, meeting both institutional and international quality standards.

The IST system supports almost the full cycle of statistical production:
  • In the Specify Needs phase, IST helps users assess existing statistical concepts, variables, and evaluate whether current sources of data meet user requirements. It facilitates the identification of gaps in data availability and helps structure metadata around national concepts and classifications.
  • During the Design phase, IST provides the tools necessary to model variables, questionnaires and validation rules, through metadata. Users can define the design of survey instruments, processing logic, and analysis outputs without programming, enabling methodological consistency and efficient reuse.
  • The Build phase is fully operationalized in IST through real-time application generation. The system interprets metadata to build survey forms, logical control rules, validation processes, and reporting procedures - allowing rapid prototyping and deployment across multiple modes (CATI, CAPI, CAWI, Desktop).
  • In the Collect phase, IST supports multiple data collection modes through its integrated web, desktop, and Android applications. These platforms provide multilingual interfaces, real-time validation, and user role segmentation, enabling quality data capture in both field and centralized environments.
  • The Process phase is deeply embedded in IST’s functionality. It handles data integration, editing, logical validation, imputation, batch controls, and finalization of data files - ensuring robust and auditable transformations from raw to cleaned data.
  • IST contributes to the Analyse phase by enabling users to generate automated and validated output tables, apply business rules, and prepare data for dissemination. Parameterized SQL procedures, metadata-defined output formats, and macro-enabled Excel reports all streamline the production of analytical results.
  • In the Disseminate and Evaluate phases, IST provides basic support: table exports and structured outputs (published in Excel, XML, JSON, CSV, or ODS with structured metadata and headers), update systems, and automatic calculation of some Quality Performance Indicators (QPIs). Full coverage of these two phases can be achieved by integrating IST with complementary tools.

Achieving Full GSBPM Coverage through Integration

IST is designed to work seamlessly alongside existing institutional systems. When NSOs combine IST with tools such as Mail Servers, MS Office/OpenOffice, Intranet portals, public-facing Web portals, and specialized statistical software (e.g. SPSS, R, τ-Argus, or dissemination platforms like .Stat Suite), they can achieve nearly complete coverage of the GSBPM 5.1 model.

This interoperability model empowers institutions to:
  • Automate routine production workflows in IST
  • Perform advanced analytics or disclosure control using R/SPSS/τ-Argus
  • Disseminate products through portals and dashboards
  • Manage evaluations and documentation via internal systems
GSBPM Mapping with IST and Complementary Tools

Data Flow

IST Data Flow
