Service Configuration

The service configuration is maintained in the file 'application.yaml' in the config folder:

# Inventory Manager Config

projectrootdir: ./
datasource: datasources.yml
datasourcetables: datasource_tables.yml
targetDataStore: target_datastore.yml
outputFolder: /opt/invmgr/output

server:
  applicationConnectors:
    - type: http
      port: 8090
    - type: https
      port: 8898
      keyStorePath: .keystore
      keyStorePassword: <password>
      validateCerts: false
      validatePeers: false
  adminConnectors:
    - type: http
      port: 8081
    - type: https
      port: 8889
      keyStorePath: .keystore
      keyStorePassword: <password>
      validateCerts: false

  # Disable the default exception mappers so custom mappers can be registered
  registerDefaultExceptionMappers: false

  # Send the request log to a file instead of the console
  requestLog:
    appenders:
      - type: file
        currentLogFilename: /var/log/pipeline/request.log
        archivedLogFilenamePattern: /var/log/pipeline/request-%i.log.gz
        threshold: ALL
        maxFileSize: 10MB
        archivedFileCount: 5

login: appuser
password: <password>
inventorySchedule: inventory-schedule.yml

database:
  driverClass: org.postgresql.Driver
  user: dbuser
  password: <password>
  url: jdbc:postgresql://localhost:5432/invmgrdb

apiURL: https://
apiKey: <API Key>


logging:
  level: INFO
  loggers:
    io.invariant:
      level: DEBUG
      additive: false
      appenders:
        - type: file
          threshold: ALL
          maxFileSize: 20MB
          currentLogFilename: /var/log/pipeline/invariant-invmgr/invmgr.log
          archivedLogFilenamePattern: /var/log/pipeline/invariant-invmgr/invmgr-%i.log.gz
          archivedFileCount: 5
    org.hibernate:
      level: ERROR
      additive: false
      appenders:
        - type: file
          currentLogFilename: /var/log/pipeline/invariant-invmgr/sql.log
          archivedLogFilenamePattern: /var/log/pipeline/invariant-invmgr/sql-%d.log.gz
          archivedFileCount: 5

The application connector ports bind the service and expose the HTTP/HTTPS endpoints for clients; the admin connector ports expose the administrative interface.

The "projectrootdir" setting specifies the root directory for the other configuration files, which must reside in that folder or at a path relative to it.

projectrootdir: ./
datasource: datasources.yml
datasourcetables: datasource_tables.yml
targetDataStore: target_datastore.yml
outputFolder: /opt/invmgr/output
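The path-resolution rule described above can be sketched as follows. This is an illustrative sketch, not the Inventory Manager's actual code; the `resolve_config_path` helper is hypothetical, and the key names simply mirror the sample config.

```python
import os

def resolve_config_path(project_root, filename):
    """Hypothetical helper: resolve a config file name against projectrootdir."""
    if os.path.isabs(filename):
        return filename  # absolute paths are used as-is
    return os.path.abspath(os.path.join(project_root, filename))

# Keys mirroring the application.yaml sample above
config = {
    "projectrootdir": "./",
    "datasource": "datasources.yml",
    "datasourcetables": "datasource_tables.yml",
    "targetDataStore": "target_datastore.yml",
}

root = config["projectrootdir"]
paths = {key: resolve_config_path(root, value)
         for key, value in config.items() if key != "projectrootdir"}
```

Under this interpretation, a relative "projectrootdir" such as "./" resolves against the service's working directory, so the service should be started from a directory that contains the other YAML files.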

The "datasource" YAML file contains the connection information for the source databases.
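A source database entry might look like the following. This is an illustrative sketch only; the field names are assumptions, not the actual datasources.yml schema.

```yaml
# Illustrative sketch -- the real datasources.yml schema may differ
datasources:
  - name: orders_db
    type: postgresql
    url: jdbc:postgresql://dbhost:5432/orders
    user: dbuser
    password: <password>
```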

The "datasourcetables" YAML file lists the tables to process, along with their audit columns and other metadata required for processing.
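A table entry with its audit columns might be shaped like this. Again, this is a hypothetical sketch; the actual keys in datasource_tables.yml may differ.

```yaml
# Illustrative sketch -- actual datasource_tables.yml keys may differ
tables:
  - name: orders
    primaryKey: order_id
    auditColumns:
      created: created_ts
      updated: updated_ts
```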

The "targetDataStore" YAML file defines the configuration settings for the target store, which is usually a Hive store.
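For a Hive target, the settings might resemble the following. This is an illustrative sketch under the assumption of a Hive store; the key names are not the documented schema.

```yaml
# Illustrative sketch -- actual target_datastore.yml keys may differ
targetDataStore:
  type: hive
  database: invmgr
  storageFormat: ORC
```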

The "outputFolder" setting points to the directory where the generated artifacts are written.

