OvalEdge Releases


OvalEdge Release 6.1 introduces a wide array of advanced features and improvements, all aimed at elevating your data management experience. Below is a comprehensive overview of the enhancements included in this release:

  1. Administrative Roles Enhancements: Admins now have the ability to assign specific roles and permissions to users at the connector level, providing better control over access and data governance.
  2. New Licensing Model: The introduction of the Viewer and Author Licenses offers more flexibility in managing user roles and access rights.
  3. Improved Data Quality Rules (DQR): The DQR functionality has been updated and enhanced, allowing users to create and manage data quality rules, ensuring better data governance practices.
  4. Enhanced Service Request Template: The Service Request Template feature has been significantly improved, supporting multiple external service integrations and enabling users to set external service systems as approvers in the approval workflow, streamlining the service request process.
  5. Customizable Notification Templates: Users now have the option to customize notification messages received via Jira, ServiceNow, Azure DevOps, and other mediums, tailoring communication to their specific needs.
  6. Deep Analysis Tool: A powerful addition to the platform that fulfills the need for a more in-depth and comprehensive examination of data changes within a specific schema. It extends the functionality of the Compare Profile and Compare Schema tools.
  7. Connector Health: OvalEdge now provides a Connector Health feature, indicating connector status based on overall performance. Users can monitor factors such as data transfer success rate, response times, error rates, and data handling efficiency for smooth data pipelines and workflows.
  8. Enhanced Global Search: The Global Search feature has been enhanced to offer global bookmarks for quick access to different environments. Users can now search for terms using classifications, allowing them to find specific terms categorized under different classifications. Additionally, customization settings for search results have been improved, enabling users to fine-tune their search preferences.
  9. Bug Fixes and Improvements: OvalEdge Release 6.1 addresses various bugs and issues reported by users, ensuring a smoother and more reliable experience for all.

Release Details

Release Type: Minor Release

Release Version (<Release. Build Number. Release Stamp>): Release6.1.6100.4ac0fe3

Build Date: 03 July 2023

What’s New

Administration Roles

OvalEdge Release 6.1 brings significant changes to the Administration roles, improving overall efficiency and reducing reliance on the super admin for all tasks. In the previous version, the super admin user had complete control over all tasks in OvalEdge, while the crawler admin was responsible for crawling data sources. This setup caused delays in task approval and execution. With the new upgrade, every connector can have its own set of admins (Author users) configured, leading to a more streamlined process.

At the connector level, the following administrators are configured:

Crawler Administrator:

  • Responsible for adding a connection.
  • Nominates the Integration Administrator and the Security & Governance Administrator during connection establishment.

Integration Administrator:

  • Manages the configuration settings for the profile and crawler settings of the connector.
  • Has the authority to delete the connector, schema, or data objects associated with it.

Security and Governance Administrator: 

On a connector level, this admin has the following privileges:

  • Sets permissions for roles on all data objects associated with the specific connector.
  • Updates governance roles on data objects linked to the connector.
  • Creates custom fields on all data objects associated with the connector.
  • Manages the creation of Service Request templates for the connector.
  • Has the authority to create and manage domains.
  • Configures categories, sub-categories, and classifications for each domain.
  • Assigns permissions to users or roles authorized to access the domain and its associated terms.

These enhancements provide a more granular level of control and empower admins to handle specific tasks efficiently within their designated connectors. By distributing responsibilities among different administrators, OvalEdge Release 6.1 ensures smoother task execution and faster approvals, contributing to an enhanced data management experience.

For more information on the New Administration Roles, please refer to the OvalEdge New Administrative Capabilities document.

New Licensing Model

In OvalEdge Release 6.1, we are excited to introduce new licenses that offer enhanced access to data objects, catering to different user needs. The two licenses available are "Viewer" and "Author," each designed to provide varying levels of privileges for effective data management.

Viewer License:

  • Metadata: Provides read access to the metadata of data objects.

  • Data: Can be granted Data No Access or Data Preview.

  • Ideal for users who need to view data objects without making any modifications.

Author License:

  • Offers the highest level of privileges for managing data objects; this license can be assigned to governance stakeholders such as Owners, Stewards, and Custodians.

  • Author users can actively participate in approval workflows, ensuring seamless data governance.

  • They have permission to view and modify both metadata and data within data objects.

These new licenses provide a more granular and tailored approach to data access, allowing users to efficiently manage data according to their specific roles and responsibilities.

For more information on the New Licensing Model, please refer to the Release6.1 License Types document.

Improved Data Quality Rules (DQR)

The latest updates to the Data Quality Rules (DQR) module are aimed at improving existing functionalities and introducing new features for an enhanced data management experience.

Add Rule Pop-up in Data Quality Rules:

  • The rule-creation process has been enhanced for greater intuitiveness, allowing users to access various options such as scheduling, notifications, and success percentage, thereby streamlining the rule-setup process.

Data Quality Objects:

  • Predefined Data Quality Rules are now available for different data objects, including tables, table columns, and reports.

  • Users can schedule specific DQRs, view executed reports, and enable/disable rules individually or collectively.

  • Notifications can be enabled, and service requests can be automated through dedicated radio buttons in case of a DQR failure.

Control Center:

  • A new Control Center has been introduced to provide better monitoring and management of executed data quality rules and their execution status.

Data Anomaly Detection:

  • The new feature enables the detection of anomalies in data assets, notifying users of potential data quality issues.

Data Quality Index (DQI):

  • OvalEdge now offers better insights to users about a DQR that was executed, with new parameters such as DQR score, Service Request Score (SR Score), and Child Score.

    • DQR Score is calculated based on the outcome of the DQ Rules executed on the specific data object.

    • SR Score is calculated based on the weighting of open service requests associated with the data object.

    • Child Score takes into consideration the weightage of child objects, such as columns being children of a table.
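
As a rough illustration, the weighted combination of these three scores can be sketched as follows. The function name and the 50/25/25 default weights mirror the configuration defaults described later in these notes; this is an illustrative sketch, not OvalEdge's actual implementation:

```python
def data_quality_index(dqr_score, sr_score, child_score,
                       weights=(50, 25, 25)):
    """Weighted average of the three DQI components.

    The default weights (50/25/25) mirror the configuration defaults
    described in this release; all names here are illustrative.
    """
    w_dqr, w_sr, w_child = weights
    total = w_dqr + w_sr + w_child  # the weights should add up to 100
    return (dqr_score * w_dqr + sr_score * w_sr + child_score * w_child) / total

# Example: a table with DQR score 90, SR score 80, child score 70
score = data_quality_index(90, 80, 70)  # 90*0.5 + 80*0.25 + 70*0.25 = 82.5
```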

These updates significantly enhance the DQR feature capabilities, providing users with robust data governance tools to maintain data quality and consistency across their organization. 

For more information on the Improved Data Quality Rules (DQR), please refer to the related Data Quality Rules documents.

Enhanced Service Request Template

OvalEdge Release 6.1 brings exciting updates to the Service Desk Template, Fulfillment Mode, and Approval Workflows, providing users with more flexibility and customization options for their service requests.

Service Desk Template, Fulfillment Mode:

  • Users can now choose between Automated and Manual Fulfillment modes, tailoring the request processing to their preferences.

  • In Manual Mode, the final approver has the option to fulfill the request after the approval process is completed, allowing for more customized handling.

  • In Automated Mode, the service request is fulfilled automatically after approval, presenting the user with a list of automatic fulfillment options to choose from.

Customize Template Fields - Custom Options:

  • When adding fields to a template, users can configure various settings using JSON format for each stage (pre-creation, creation, fulfillment, approval, and final approval).

  • Visibility settings determine when and where the field should be displayed in the template.

  • Editability settings control whether users can edit the field details at different stages.

  • Mandatory status settings determine if a field is essential for raising a service request.
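
For illustration, the per-stage settings for a template field might look like the following JSON fragment. The field name and every key shown here are hypothetical, invented for this sketch; consult the product documentation for the actual schema:

```json
{
  "fieldName": "businessJustification",
  "stages": {
    "preCreation":   { "visible": false, "editable": false, "mandatory": false },
    "creation":      { "visible": true,  "editable": true,  "mandatory": true  },
    "approval":      { "visible": true,  "editable": false, "mandatory": false },
    "fulfillment":   { "visible": true,  "editable": false, "mandatory": false },
    "finalApproval": { "visible": true,  "editable": false, "mandatory": false }
  }
}
```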

Support for Multiple External Integration Systems:

  • The service request template now supports multiple external integrations such as Jira, ServiceNow, and Azure DevOps, offering flexibility to switch between different external tools from a single template.

  • Each integration can be easily configured and managed through a user-friendly interface with "active" or "inactive" status options.

  • Template fields can be mapped to corresponding external service tools.

Approval Workflows:

  • The Approval Workflow now allows external tools (e.g., Jira, ServiceNow, Azure DevOps) as approvers for service requests, expanding the options for efficient approval processes.

  • The SLA (Service Level Agreement) checkbox triggers advanced notifications to approvers, ensuring timely processing and improved accountability.

  • Users can set specific timeframes for the approval process, and approvers must meet designated response times to prevent delays or overlooked requests.

These enhancements provide users with powerful tools to customize service request handling, streamline approval workflows, and seamlessly integrate external systems for a more efficient and productive experience. 

For more information on the Enhanced Service Request Template, please refer to the related Service Request Template documents.

Customizable Notification Templates

OvalEdge Release 6.1 introduces the all-new Notification Templatization feature! With this exciting addition, admins have unprecedented control over what notifications users receive.

With our user-friendly interface, admin users can effortlessly tailor notification messages by simply dragging and dropping variables that correspond to specific events or features. The process is intuitive and streamlined, allowing admins to create personalized messages that resonate with users.

The Notification Templatization feature supports three different mediums for notifications: Inbox, Email, and Slack. Each medium has its own dedicated tab, making it convenient for admins to customize messages for different communication channels.

Get rid of generic and standardized notifications! OvalEdge Release 6.1 puts customization power in your hands, ensuring that your users receive relevant and engaging messages through their preferred channels. Upgrade today and unlock the full potential of personalized notifications with our innovative Notification Templatization feature.

Navigation: OvalEdge Application > Administration > System Settings > Notifications Templates.

Deep Analysis Tool

The Deep Analysis Tool in OvalEdge provides invaluable insights by identifying impacted data objects across different data systems when a business transaction occurs in one or more enterprise applications. This analysis is particularly useful for understanding the flow of information specific to a business transaction or use case, leveraging profiling metrics of data objects captured in a controlled setting.

Follow these steps to identify transaction-related data objects:

  • Restrict Concerned Applications:
    Ensure that the concerned applications are restricted for generic use, allowing focused analysis.
  • Baseline Profiling:
    Perform profiling of the connectors of the data systems for the applications. This establishes the baseline metrics before the business transaction.
  • Perform Business Interaction:
    A user performs the desired business interaction on the enterprise application(s).
  • After Transaction Profiling:
    Conduct profiling of the connectors of the data systems again. This generates profiling metrics after the completion of the business transaction.
  • Comparison and Analysis:
    Compare the before and after profiling metric sets. Any data objects whose profiling metrics have changed are potential candidates related to the business transaction.
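
The comparison step above can be sketched in a few lines of Python. The snapshot structure (object name mapped to its profiling metrics) is an assumption for illustration, not the actual OvalEdge profiling format:

```python
def changed_objects(baseline, after):
    """Compare two profiling snapshots and return the data objects
    whose metrics changed -- candidates related to the transaction.

    Each snapshot maps a data-object name to its profiling metrics
    (row count, null count, etc.); the structure is illustrative.
    """
    candidates = []
    for obj, metrics in after.items():
        if baseline.get(obj) != metrics:
            candidates.append(obj)
    return candidates

baseline = {"orders": {"rows": 1000, "nulls": 3}, "customers": {"rows": 250, "nulls": 0}}
after    = {"orders": {"rows": 1001, "nulls": 3}, "customers": {"rows": 250, "nulls": 0}}
print(changed_objects(baseline, after))  # ['orders']
```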

The information obtained through the Deep Analysis Tool is invaluable for understanding the impact of business transactions in complex enterprise environments with multiple integrated applications.

Note: Prior to utilizing the Deep Analysis feature, it is essential for the schema under consideration to undergo profiling at least once. This prerequisite ensures the availability of necessary profiling data, facilitating accurate comparison of metrics and yielding reliable and meaningful analysis results.

Configuration: OvalEdge Application > Administration > System Settings > Others > Search for the “enable.deep.analysis” key > Set the value to “true”

For more information on the Deep Analysis Tool use case, please refer to the Deep Analysis Tool Use Case document.

Connector Health

Connector Health in OvalEdge refers to the status based on the overall performance of a connector within an application or data integration platform. This assessment takes into account various factors, including the success rate of data transfers, response times, error rates, and the connector's ability to handle large data volumes efficiently. Monitoring connector health is vital for maintaining smooth data pipelines and workflows.

Key Benefits of Monitoring Connector Health:

  1. Issue Identification and Resolution: Monitoring Connector Health helps identify and address any issues or bottlenecks that could potentially impact data flow and data integrity. Early detection allows for timely resolution and minimizes disruptions.
  2. Proactive Management and Optimization: Keeping an eye on connector health enables proactive management and optimization of data integration processes. By identifying areas for improvement, teams can fine-tune the connectors to achieve optimal performance.
  3. Reliable Data Transfer: Ensuring the health of connectors guarantees reliable and accurate data transfer between systems. This is critical for maintaining data consistency and data quality throughout the integration process.

In the OvalEdge application, the Connectors Health feature displays a clear indicator of the connection status for each connector. A green icon indicates an active connection, while a red icon indicates an inactive connection. This visual representation provides users with a quick and easy way to monitor the health of their connectors, facilitating efficient data management and data integration within the platform.

For more information on Connector Health, please refer to the Connector Health document.

Enhanced Global Search

The enhanced Global Search feature is designed to provide a more efficient and personalized search experience for users.

  • Global Bookmarks for Easy Environment Switching:
    Users now have access to global bookmarks at the top header of the application platform. These bookmarks offer quick access to different environments and cannot be modified, ensuring seamless switching between important environments.
  • Search by Classifications:
    Users can now search for terms using classifications. Simply entering the name of a classification in the search box will yield all the terms classified under that category. This streamlined approach makes it easier for users to find specific terms, especially when dealing with multiple terms classified under different categories.
  • Enhanced Customization Settings for Search Results:
    OvalEdge now offers improved customization settings for search result calculations. Users can tailor the display of relevant search results to match their specific requirements. This includes the ability to adjust settings for including or excluding synonym and popularity scores, enabling users to fine-tune search results according to their preferences.
    • If both Synonym and Popularity Scores are enabled: When both ‘globalsearch.score.use.synonym’ and ‘globalsearch.score.use.popularity’ are set to true, the search results are ranked using the formula (Elasticsearch Score + Synonym Score) multiplied by Popularity Score. This ensures that all three scores contribute to the ranking, resulting in more accurate and relevant results.
    • If only Synonym Score is enabled (Popularity Score is disabled): When ‘globalsearch.score.use.synonym’ is set to true and ‘globalsearch.score.use.popularity’ is set to false, the search results are scored by combining the Elasticsearch Score and the Synonym Score; the Popularity Score is excluded from the calculation. This configuration focuses on synonym matches while disregarding popularity.
    • If only Popularity Score is enabled (Synonym Score is disabled): When ‘globalsearch.score.use.synonym’ is set to false and ‘globalsearch.score.use.popularity’ is set to true, the search results are ranked on the Elasticsearch Score multiplied by the Popularity Score; the Synonym Score is not considered. This configuration emphasizes popularity, prioritizing more frequently accessed or relevant items.
    • If both Synonym and Popularity Scores are disabled: When both settings are false, the search results are based solely on the Elasticsearch Score, simplifying the scoring system to relevance alone.
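
Taken together, the four combinations reduce to one simple formula, sketched below in Python (function and parameter names are illustrative, not OvalEdge code):

```python
def relevance_score(es_score, synonym_score, popularity_score,
                    use_synonym=True, use_popularity=True):
    """Combine the three scores as described for the two
    'globalsearch.score.use.*' flags; an illustrative sketch."""
    score = es_score
    if use_synonym:
        score += synonym_score        # add synonym matches
    if use_popularity:
        score *= popularity_score     # boost by popularity
    return score

relevance_score(2.0, 1.0, 1.5)                        # (2.0 + 1.0) * 1.5 = 4.5
relevance_score(2.0, 1.0, 1.5, use_popularity=False)  # 2.0 + 1.0 = 3.0
relevance_score(2.0, 1.0, 1.5, use_synonym=False)     # 2.0 * 1.5 = 3.0
relevance_score(2.0, 1.0, 1.5, False, False)          # 2.0
```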
  • Advanced Filters:
    The existing Advanced Filters in the Global Search have been enhanced to include additional conditions, providing users with more refined search options for precise results.
    • Equal to condition
      The "Equal to" condition in the Advanced Search data filter is used to specify that the search results should include only those items that match the exact value provided.
      Example: Suppose we have a dataset of employees with a column named "Department" and we want to find all employees in the Sales department. Set the filter as follows: Field: Department; Operator: Equal to; Value: Sales. The search results will then display only employees whose Department value is exactly "Sales," narrowing the search to data matching the given condition.
    • Starts with condition
      The "Starts With" condition in the Advanced Search data filter retrieves items with a value beginning with a specific set of characters.
      Example: Assume we have a dataset of customers with a column named "Name" and we want to find all customers whose names start with the letter "A". Set the filter as follows: Field: Name; Operator: Starts With; Value: A. The search results will include only customers whose names start with "A", providing a more targeted search on the specified starting characters.
    • Ends with condition
      The "Ends With" condition in the Advanced Search data filter retrieves items with a value ending with a specific set of characters.
      Example: Suppose we have a dataset of products with a column named "SKU" (Stock Keeping Unit) and we want to find all products whose SKU ends with the number "123". Set the filter as follows: Field: SKU; Operator: Ends With; Value: 123. The search results will include only products whose SKUs end with "123", refining the search on the specified ending characters.
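
A minimal sketch of how these three conditions behave, using plain Python predicates (the data, operator labels, and function names are illustrative, not OvalEdge internals):

```python
OPERATORS = {
    "Equal to":    lambda value, target: value == target,
    "Starts With": lambda value, target: value.startswith(target),
    "Ends With":   lambda value, target: value.endswith(target),
}

def apply_filter(rows, field, operator, target):
    """Keep only rows whose `field` satisfies the chosen condition."""
    test = OPERATORS[operator]
    return [row for row in rows if test(row[field], target)]

employees = [
    {"Name": "Alice", "Department": "Sales"},
    {"Name": "Bob",   "Department": "Engineering"},
    {"Name": "Ann",   "Department": "Sales"},
]
apply_filter(employees, "Department", "Equal to", "Sales")  # Alice and Ann
apply_filter(employees, "Name", "Starts With", "A")         # Alice and Ann
apply_filter(employees, "Name", "Ends With", "b")           # Bob
```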

With these enhancements, the Global Search feature in OvalEdge is now more powerful and user-friendly, allowing users to find the information they need quickly and accurately. 

For more information on Configuring Global Search Results, please refer to the Configuring Global Search Results Calculations document.

New Configurations

The latest release introduces a set of new configurations that provide users with even greater control over the application's behavior. The newly added configurations are listed in the following Table.




Set the number of child tags to be displayed on the respective Tag summary page.


The default value is set to 100.

Enter the value in the field provided.


Set the pagination limit for displaying the number of health history records in the Connector Health pop-up.


The default value is set to 20.

Enter the value in the field provided.


Configure the weightage of the Data Quality Rule score to calculate the data quality score on data objects.

Note: Data Quality Score is the weighted average of three parameters: Data Quality Rule Score, Service Request Score, and Child Score. The total weightage of all three parameters should add up to 100%.


The default value is set to 50.

Enter the ratio percentage in the field provided.


Configure the weightage for the service request score to calculate the data quality score on data objects.

Note: Data Quality Score is the weighted average of three parameters: Data Quality Rule Score, Service Request Score, and Child Score. The total weightage of all three parameters should add up to 100%.


The default value is set to 25.

Enter the ratio percentage in the field provided.


Configure the weightage of the child score to calculate the data quality score on data objects.

Note: Data Quality Score is the weighted average of three parameters: Data Quality Rule Score, Service Request Score, and Child Score. The total weightage of all three parameters should add up to 100%.


The default value is set to 25.

Enter the ratio percentage in the field provided.


Set the minimum and maximum range for data changes to identify any anomalous data objects. Any data objects that lie outside of this range will be considered abnormal or anomalies.


The default value is set to 10-50.

Enter the value in the field provided to configure the new maximum limit.


Set the threshold percentage above or below which an anomaly will be generated for the rate of change in the data series. Any data objects that lie outside of this threshold will be considered abnormal or anomalous.


The default value is set to 50.

Enter the value in the field provided to configure the new maximum limit.

oe.globalsearch.searchorder

Control the display of tabs in the Global Search by enabling or disabling them. This applies to both the tabs across the top of the page and the filter left panel.


Enter the tab names in the field provided.

  • All - ALL
  • Databases - schema
  • Tables - oetable
  • Tags - oetag
  • Table Columns - oecolumn
  • Files - oefile
  • File Columns - oefilecolumn
  • Reports - oechart
  • Report Columns - chartchild
  • Codes - oequery
  • Business Glossary - glossary
  • Data Stories - oestory
  • Projects - project
  • Service Requests - servicedesk


Configure how the Welcome story should be displayed on the Home Page by default.


The default value is False.

  • If set to True, the welcome story will be displayed to its maximum size.
  • If set to False, the welcome story will be displayed within a limited size with 'See More' and 'See Less' options to expand/collapse the content.


Define a standard message in the Azure DevOps > Stories > Description field.


Enter the message to be displayed in the Description Field.


Set the maximum number of data objects that can be associated with a DQ Rule.


The default value is set to 1000.


Configure the maximum number of failed results of the DQR (Data Quality Rule) to be displayed in the Control Center Tab.


The default value is set to 50.

Enter the value in the field provided to configure the new maximum limit.


Configure to display either Published Terms or Terms in both Published & Draft status in the drop-down options when associating terms to data objects.


The default value is set to False.

  • If set to True, only published terms will be displayed in the drop-down when adding terms to data objects.
  • If set to False, terms in both published and draft status are displayed in the drop-down when adding terms to data objects.


Assign the 'User and Role Creator' privileges to Roles.


The default value is OE_ADMIN.

Click on the field to select any role from the drop-down.


Configure to add a role as a task assignee who can reassign tasks to project members.


The default value is set to PROJECT_MEMBERS.

Click on the field to select any role from the options PROJECT_OWNER, CURRENT_ASSIGNEE, and PROJECT_MEMBERS.


To check the access permissions a user has on data objects before assigning a task to any member.


The default value is empty.

Click on the parameter field to check access permissions on a data object. The available access permissions are as follows: META_READ, META_WRITE, DATA_PREVIEW, DATA_READ, DATA_WRITE, GOVERNANCE_ROLE, and ADMIN.


To control the visibility of tasks in the board and list view of a project based on security permission settings


The default value is set to false.

  • If set to True, the security permissions are enabled and the users cannot view the tasks in the projects if they do not have authorized permissions.
  • If set to false, the security permissions are disabled and the users can view the tasks in the project.


Assign 'Project admin' privileges to Roles.


The default value is OE_ADMIN.

Click on the field to select a role you want to assign as Project Admin.


To show/hide the display of graphical presentations such as Tables, Files, and Reports on the home page.


  • If set to true, the graphs will be displayed.
  • If set to false, the bar graph of objects will not be displayed.


To set a threshold value to determine the maximum duration an API can take to load. The APIs exceeding the specified duration will be displayed in the API Performance tab.


Enter the value in the field provided (in milliseconds).


To set the maximum number of sample file name logs to be displayed in the folder during the Bucket Analysis process.


Enter the desired value in the field provided.


To enable or disable the Folder Analysis tab from the File Manager and the File Catalog pages


  • If set to True, the Folder Analysis tab gets displayed in the File Manager and the File Catalog pages.
  • If set to False, the Folder Analysis tab will not be shown in the File Manager and the File Catalog pages.


To search for full-text information with or without highlights based on the configured character size.


  • If set to True, the full-text search will be performed without highlights.
  • If set to False, the search will consider a character length of 50,000: it performs the search using highlights for the initial portion of the text and displays those results with highlights; for the remaining portion of the text, results are displayed without highlights.


To configure the maximum limit for the preferred value for highlights. Adjusting this value allows users to control the extent of text that can be visually emphasized for easier identification or reference.


The default setting is 200,000 characters.

Enter the desired value in the field provided.


Configure the Synonym (Configure Search Keyword) Score in the Relevance score formula to determine the most relevant search results. The relevance score is calculated based on three components: the Elasticsearch score, the popularity score, and the synonym score (if configured).


  • If set to True, the search results calculation includes the Synonym score.
  • If set to False, the search results calculation excludes the Synonym score. Relevance score calculation depends solely on the Elasticsearch score and the settings configured for the Popularity score.


Configure the Popularity Score in the Relevance score formula to determine the most relevant search results. The relevance score is calculated based on three components: the Elasticsearch score, the popularity score, and the synonym score (if configured).


  • If set to True, the search results calculation includes the popularity score.
  • If set to False, the search results calculation excludes the popularity score. The relevance score calculation depends solely on the Elasticsearch score and the settings configured for synonym score.

Advanced Jobs

The latest release introduces a set of new advanced jobs that provide users with even greater control over the feature's behavior. The newly added advanced jobs are listed in the following Table.

Advanced Job


Assign tag to table or column

This job assigns the column tags to the table, or the table tags to the columns, based on the value given in the attribute.


Enter '1' to assign table tags to columns, or '2' to assign column tags to the table.

SSIS Load Folders Via Bridge

The purpose of this job is to establish a new SSIS connection and to load the projects and packages available in the SSIS connection to the OvalEdge application via Bridge.


SSIS Connection Name/ID

SSIS Folder Path of Bridge

License Type: “True” for auto-build lineage, “False” for the standard license

Bridge ID


This job updates the User License Type based on the existing permissions when upgrading from previous versions to 6.1.

Attributes: No Attributes are required for this job to run.

Process relationname type 

The purpose of this advanced job is to enable users to explore relation types, such as synonyms, relates to, consists of, etc., between business terms and other data objects or terms. It offers valuable information, including the glossary ID, related object ID, relation type ID, and object type.

Attributes: Not required.

Download Metadata to server 

This advanced job facilitates downloading metadata for data objects, users and roles, or business glossary terms into a .csv file. The downloaded details will be saved to the specified path provided in the attributes.


  • Type: Enter the object type (e.g., Glossary)
  • Path: Specify the download path for the file.

Alert file check validation

This advanced job monitors the addition of any file within a folder stored in the NFS connection. It aims to ensure that files are regularly added to the designated folder. If no files are added within the specified time interval (which can be set, for example, to 24 hours), the job triggers an alert. The alert is sent to users via the system alerts inbox to notify them of the absence of file additions.


  • NFS Connection ID: Specify the ID of the NFS connection where the folder is located.
  • File Name with Extension: Provide the name of the file (including the extension) to be monitored.

Build lineage from python json config files

This job builds lineage from Python JSON configuration files.


  • Root file path: Specify the root folder path where the Python JSON configuration files are located.
  • Source Connection ID: Enter the crawler connector ID for the data source involved in the lineage.
  • Destination Connection ID: Enter the crawler connector ID for the data destination involved in the lineage.
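The release notes do not document the schema of these JSON config files, so the layout below is purely a hypothetical illustration of how a source-to-destination mapping might be read into a lineage edge:

```python
import json

# Hypothetical config layout -- the real schema used by the job is not
# documented in these release notes.
config_text = """
{
  "source": {"connection_id": 101, "table": "staging.orders"},
  "destination": {"connection_id": 202, "table": "warehouse.orders"}
}
"""

def read_lineage_edge(text):
    """Parse one JSON config into a (source, destination) lineage pair,
    each identified by (connection_id, table)."""
    cfg = json.loads(text)
    src = (cfg["source"]["connection_id"], cfg["source"]["table"])
    dst = (cfg["destination"]["connection_id"], cfg["destination"]["table"])
    return src, dst
```

The real job would scan every config file under the configured root folder path.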


This job is designed to automate the process of extracting relevant data from the DBT file and synchronizing it with the corresponding descriptions in OvalEdge.


  • Source Connection ID: Enter the crawler connector ID for the data source involved in the lineage.
  • Destination Connection ID: Enter the crawler connector ID for the data destination involved in the lineage.


This job is designed to build lineage by reading JSON files.


  • JSON File Path Folder: Specify the folder path where the JSON files containing lineage information are located. The job will scan and process the JSON files within this folder to build the lineage.

Extract Queries from the source to build lineage

This job is designed to extract unique queries from a data source in order to build lineage. It aims to remove duplicate queries from the query logs and focus on unique queries for lineage analysis.


  • Enter the connection ID of a connector.
  • Enter the file name with the path that contains the query.
  • Enter the query name and query content columns, separated by commas.
  • Enter the server type used to parse the queries.
  • Enter the chunk size to process the queries in batches.
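The deduplication step can be sketched as below; treating queries as equal after whitespace/case normalization is an assumption about what "unique" means here, and the chunked loop mirrors the chunk-size attribute:

```python
def unique_queries(query_log, chunk_size=2):
    """Return queries from `query_log` with duplicates removed,
    processing the log in fixed-size chunks. Normalizing whitespace
    and case before comparison is an assumption, not documented
    behavior."""
    seen = set()
    out = []
    for i in range(0, len(query_log), chunk_size):
        for q in query_log[i:i + chunk_size]:
            key = " ".join(q.lower().split())
            if key not in seen:
                seen.add(key)
                out.append(q)
    return out
```

Chunking keeps memory bounded when query logs are large; only the set of normalized keys needs to stay resident.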

Download Metadata to Server

The purpose of the advanced job is to assist users in downloading a "template with data" for multiple configured objects in the specified attribute. The job allows users to download data related to various object types such as roles, users, business glossary terms, data objects, etc., and save it as a .csv file.


  • Object Type: Users can enter the name of the object type they want to download the template for. They can specify multiple object types by separating them with commas. For example, if they want to download templates for roles and users, they can enter "roles,user" in this attribute.
  • File Download Path: Users can enter the path where they want to download the template. This specifies the location on their system where the .csv file will be saved.

Build Lineage from extracted Queries

This advanced job is designed to build lineage using the queries fetched from the previous advanced job, "Extract Queries from the source to build lineage".


  • Vertica Connection Info ID: Enter the ID or details of the Vertica connection.
  • Path: Provide the path generated as output from the previous advanced job, "Extract Queries from the source to build lineage". This path should point to the location of the query file generated by the extraction job.

 Lineage for Metrics 

The purpose of this job is to analyze the metrics used in a Qlik Sense report and establish their lineage. By examining the formulas associated with each metric, the job will trace the data sources, transformations, and calculations involved in the creation of those metrics.


Qlik Sense Connection ID: Specify the ID of the Qlik Sense connection containing the report for which lineage needs to be built.

External Tickets Syncing 

The purpose of this advanced job is to synchronize the status of tickets in an external ticketing tool, such as Jira, ServiceNow, or DevOps. The job facilitates changing the status of the tickets to either "Approved" or "Rejected" based on their current status.

  • Unapproved tickets in the external ticketing tool will be directly changed to the "Rejected" status.
  • Approved tickets in the external ticketing tool will be moved to the "Approved" status.
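The status mapping described above amounts to a simple rule; the sketch below uses hypothetical external status names, since the notes do not list which ticket statuses count as approved:

```python
def sync_ticket_status(current_status, approved_statuses=("Done", "Approved")):
    """Map an external ticket's current status to the OvalEdge request
    outcome: statuses in `approved_statuses` become "Approved", and
    everything else (unapproved tickets) becomes "Rejected". The names
    in `approved_statuses` are illustrative assumptions."""
    return "Approved" if current_status in approved_statuses else "Rejected"
```

In practice the set of approved statuses would come from the workflow's Approved Status configuration.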



Tags

  • In Tags | Child Tags | Previously, there was no 'X' symbol available to remove child tags. An 'X' symbol has now been introduced for child tags. Clicking it triggers a warning pop-up asking you to confirm the removal of the tag; choose 'Yes' to proceed or 'No' to cancel.

Data Catalog

  • The Data Catalog list view has been upgraded to provide more options for interacting with data objects. 

    • Quick View
    • Copy to Clipboard
    • Open in New Window
  • If the user navigates to any data object and hovers over the term associated with it, the business description of the term will be displayed.
  • The Data Catalog now allows users to configure the visibility of two additional field columns in the tabular plug-in: 'Empty Count' and 'Zero Count.' These columns can be easily checked or unchecked to enable or disable in the tabular plug-in for Table Columns and File Columns.
  • Nine Dots Menu | Configure Search Keywords | In the previous version, only admin users had the privilege to add or remove keywords. With the latest update, business users can now add or delete keywords added by them. However, admin users still retain the exclusive authority to modify or delete any keywords added by themselves or other business users.
  • Previously, when users selected data objects such as a Database, Table, or Table Column in bulk and clicked "Remove Terms" in the Nine Dots menu, the Remove Term pop-up displayed all terms available in the application. Now, only the terms that are actually assigned to the selected objects are displayed in the Remove Term pop-up.
  • Tabular List View | A Cart icon is displayed in the Project or Access Cart which is gray when no data objects are selected and highlighted in blue once data objects have been added to the access cart or default project.
  • Metaread users were unable to Configure the Search Keywords on data objects. Now, any user can configure the Search Keywords on data objects.
  • The configured Code Custom Fields options are now displayed in a small pop-up window. This pop-up window also shows the values that have already been selected, making it easier for users to keep track of their selections.
  • Lineage | Tabular View | The Associated Objects column filter has been enhanced to support filtering of 'View Columns'.
  • Lineage | The Lineage screen now displays the full name of the source or destination objects in the tooltip, without any character limit. Previously, tooltips were limited to 30 characters.
  • Tables
    • Previously, the Nine Dots menu did not include the "Update Governance Roles" option for table data objects. This option is now available in the Nine Dots menu for easy access and management of a table's governance roles.
    • The sort icon previously sorted the list in descending order first and then ascending. It now sorts in ascending order first and then descending.
  • Reports
    • Summary Page | Top Users | For Tableau reports, instead of providing an overall count for individual users, the top users are now categorized based on their device logins.
    • Breadcrumbs at the top now show the hierarchical navigation path of the report and highlight the current report object in blue, helping users understand where they are within the report structure.
  • Report Columns
    • A new section has been incorporated to display the “Last Meta Sync Date” to the users.
  • Codes
    • Previously, users could add both cataloged and uncataloged objects to any active DQR. The application now prevents users from adding uncataloged objects to an active DQR.

Business Glossary

  • Business users with Meta-Read permissions can now view only basic term details. The “View Details” button is disabled for users with Meta-Read permissions, preventing them from accessing more detailed information.
  • Suggest a Term | In the service request template a “Detail Description” field has been added to the template for the selected Domain.
  • Term detailed page | The Nine Dots options now include a “Change Domain” option to change the domain of the term by selecting the desired domain from a list of available options.
  • The term detailed page includes an "Add Objects" button that enables the bulk addition of selected data objects.
  • When a new term is suggested using the “Suggest a New Term” option, a service request is raised. Once the request is approved by the final approver, the user can either publish the term or leave it in draft status.
  • In the service request template, the Domain description is now displayed in the tooltip when hovered over the domain name.

Data Stories

  • The newly added report (added using the report icon) is now displayed as a hyperlink. If the user clicks on the report name, the application redirects the user to that specific report.


Lineage

  • A new view, "Flow View", is available to visualize data lineage. This view displays the data flow as a flowchart between databases or data sources, providing an overview of the entire data flow, including dependencies and relationships. The Flow View also features a "Reset" button in the bottom right corner that reverts the graph to its initial representation, with the ability to zoom in or out using the “+” and “-” icons.
  • The Refresh button now has a tooltip describing its functionality.


Projects

  • The 'Transition functionality' for configuring different project status flows has been disabled on the Projects page. However, it can now be configured from the backend code for clients who require this feature. Once configured from the backend, the Transitions toggle becomes enabled for each project, and it is up to the user's discretion to enable it for a specific project.
  • Projects | List View | Previously, selecting the Business Glossary tab and downloading it displayed term-related fields in the Excel file instead of project details. Now, project-related column fields are also displayed when projects related to the Business Glossary are downloaded.

Service Desk

  • The object was not appearing in the search results when searching in Service Desk with the full name, despite its availability in the application. However, the Application now displays accurate search results.
  • After approval of a term creation request when a term is created in the application, the steward of the term will be notified regarding the same. If the request is raised by a team, then the whole team will be notified along with the steward.
  • The application will notify the users of any invalid characters added to the fields while raising a service request. This notification will be at the field level.
  • The SLA checkbox provides a way to set specific timeframes for the approval process. It requires approvers to respond to requests within a designated timeframe, preventing delays and reducing the risk of requests being overlooked or forgotten.
  • For example, if the SLA for a particular approval workflow is set to 24 hours, the approvers assigned to that workflow must approve or reject the request within that time frame. If they fail to do so, a background advanced job triggers a reminder notification to the approver every half hour until the request is approved.
  • The Approval Workflow can now be integrated with external ticketing systems such as Jira, ServiceNow, and Azure DevOps. To use this feature, users must select the external ticketing system in the 'Approver' field drop-down while setting up the approval workflow and also provide the “Approved Status” or “Rejected Status” when integrating the service request with external ticketing systems. The service request is approved or rejected when the ticket matches the Approved/Rejected status.
  • Service Requests can now be grouped based on connection type and approvers in the workflow, thanks to the group feature. Additionally, both system and custom templates now support this functionality.
  • Previously, only users, teams, and roles from the organization were able to approve service requests. However, with the latest update to the Approval Workflow, external tools can now be added as approvers for service requests. Users can select the external tool from a dropdown list of options that are configured to the service request template. If multiple levels of approvers are configured, external tool approvers can be set for different levels.
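The SLA reminder behavior described above (a 24-hour window, then a reminder every half hour) can be sketched as a small calculation; the function name and parameters are illustrative, not OvalEdge internals:

```python
def reminders_due(hours_since_request, sla_hours=24, reminder_interval_hours=0.5):
    """Return how many reminder notifications should have fired by now:
    zero while the SLA window is still open, then one reminder per
    half-hour interval past the deadline, mirroring the example above."""
    overdue = hours_since_request - sla_hours
    if overdue <= 0:
        return 0  # still inside the SLA window
    return int(overdue / reminder_interval_hours)
```

A background job evaluating this on each cycle would send a notification whenever the count increases.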

Governance Catalog

  • Certification Policy
    • Previously, the "Type" column in the certification options displayed all options, including Violation, Caution, Inactive, and None, instead of just the "Certify" option. With the new update only "Certify" will be displayed in the "Type" column.
    • Additionally, when executing a policy, the job logs were not displaying complete information for the trigger object. The job logs now display the trigger object's complete information.
  • Governed Data Query
    • The viewing of masked table column data in the GDQ results has undergone an enhancement. Previously, users were able to view the masked table column data, but the feature has been updated to prevent this.
  • Users List
    • A new column “Status” has been added to the tabular list view to display whether the user is Active or Inactive.
  • Data Quality
    • In Data Quality | Add Rules now includes new fields such as Function, Dimension, Associated Objects, Steward, Tags, Scheduling, Execution Instructions, Violation Message, Corrective Action, Alert on Failure, and Service Request on Failure. New additions to this module include:
    • Data Quality Objects suggest automated Rules based on OvalEdge data objects. Users can enable, execute, and view reports on these rules, as well as schedule them to run at a specified time. They can also set up alerts and create service requests on failure.
    • The Control Center displays a list of all data quality rules that failed after execution. It provides additional information such as the Connection name, Schema name, and Table name associated with each DQR, along with other self-explanatory columns.
    • The Data Anomalies page lists all data objects where unusual deviations in metadata are detected, such as an increase or decrease in the count of tables or rows. Users can set a threshold percentage for this anomaly, and if the value exceeds the threshold, the corresponding data objects and their details will be displayed on this page.
  • Data Quality Rules 
    • Selecting any rule navigates to the Summary page, where a new "Control Center" toggle button allows the failed results of the DQR rule to be included in the Control Center.
    • While establishing a DQR users can add the desired data objects to the rule by clicking on the newly added list view.
    • To edit the name of an existing Data Quality Rule (DQR), a pop-up was previously displayed. With the new update, users can edit the name inline by simply clicking on the edit icon.
    • When a user enables "Caution Downstream" in a Data Quality Rule (DQR), they will receive an alert if any of the cataloged objects fail during the execution of the DQR. The user will see a caution message across all downstream objects.
  • Data Quality Objects
    • Based on the objects selected if the rule type is automated, multiple rules associated with that data object can be executed as a single job.
    • The data quality objects tab displays the automated rules relevant to the selected object and its type. With this enhancement, users have the ability to modify, activate, or deactivate rules. Additionally, all active rules can be executed simultaneously, and the reports generated from these executions can be viewed in the Data Quality Reports. The activation or deactivation of rules can also be executed through the Nine Dots menu.
    • The functionality of deactivating a rule while it is being executed has been modified. Previously, users were able to do so, but now, they cannot deactivate a rule while it is in the process of being executed.
  • Control Center
    • The Remediation Assignee column is now available with an edit icon which allows users to change the Remediation Assignee for the associated Data Quality Rule.
    • Users can now navigate to the Data object summary page by clicking on the Data Object name on the Control Center page.
    • Now users can change the current Status of the Data Quality Rule using the edit icon.
    • When the user clicks on the icon below the "Remediation Plan" column, a pop-up displaying the "Remediation SQL" information will appear.
  • Data Anomalies
    • The user has the ability to update the status of an anomaly by clicking on the edit icon that will appear when the user hovers over the status. The user can also view the status change history within the same popup window as they make updates
    • If the "Run Anomaly Detection" checkbox is selected for any specific data object, then the anomaly detection process will run as a separate job once the profiling is finished. The user can then access all logs related to the anomaly detection process for that specific schema.
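The anomaly-threshold check described under Data Quality above can be sketched as a percentage-change test on metadata counts; the function and its zero-baseline handling are illustrative assumptions:

```python
def is_anomaly(previous_count, current_count, threshold_pct):
    """Flag a metadata anomaly when a table/row count changed by more
    than `threshold_pct` percent relative to the previous value (the
    user-set threshold described in the Data Anomalies page)."""
    if previous_count == 0:
        return current_count != 0  # assumption: any change from zero is flagged
    change_pct = abs(current_count - previous_count) / previous_count * 100
    return change_pct > threshold_pct
```

Objects for which this returns True would be the ones surfaced on the Data Anomalies page with their details.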

My Resources

  • My Profile
    • Profile Picture now supports avatar images, allowing users to personalize their profiles with a custom image.
  • Inbox
    • A new tab "Project Alerts" has been added to easily view all the notifications related to projects and stay informed about any updates or changes.
  • My Permissions
    • A download icon is enabled in all the tabs to allow users to download the details.
    • Table Column, File Column, and Report Column tabs are now available for each governance role. It allows users to view the objects for which they are assigned either as a steward/owner/custodian or other governance roles.
  • My Watchlist
    • Reports tab | Metadata Changes option | Previously, users were not notified when reports changed from “Active” to “Inactive” status or vice versa. Users are now notified of these status changes.
    • In Reports, the Significant Data Changes option has been removed.

File Manager

  • The “Upload File” option in the Nine Dots menu has been renamed to “Upload File/Folder”

Query Sheet

  • While in the Auto SQL tab, the user can drag the bar towards the right to view additional information.
  • Previously, in column references, all previous query history was displayed instead of only the saved queries. This has been improved so that only saved queries are shown in column references.


Jobs

  • When downloading job logs, a spinner icon will appear to indicate that the system is in the process of downloading the logs.
  • The application now accurately displays user status in the job status. Previously, the job status did not indicate whether a user was deleted or deactivated when either action was performed in the "Users & Roles" section of the "Administration" tab.
  • The “Job Step Name” column field in the tabular list view now displays the Connection Name when the connector is deleted from the application. Previously, after deleting a schema or connection, the Job Step Name column would only show the Connection Id, which could lead to confusion while working with multiple connections.
  • The application presents information about active user sessions and running jobs with a 10-minute refresh interval. This feature will assist the user in monitoring usage and identifying performance-related issues.
  • Whenever an AI recommendation is executed, the log displaying that specific job now displays the corresponding term ID as well.
  • Previously, logs of jobs that were ongoing, completed, or started between the specified Start time and End time were not displayed. These jobs are now included.
    Example: If the job start time is “03-21-2023 2:00 pm” and the end time is “03-21-2023 3:00 pm”, jobs that were ongoing, completed, or started between 2:00 pm and 3:00 pm will also be displayed.

Advanced Tools

  • Impact Analysis
    • Impacted Objects | Previously, users were unable to download the impacted objects for a single Source ID using the Nine Dots option. Users can now download the impacted objects for a single Source ID.
    • The issue where users were unable to add all or a few columns as source objects has been addressed.
  • Build Auto Lineage
    • After using “Correct Query” and clicking the back button, users were not redirected to the same object in the tabular list view and were instead directed to a different page. The application now works as intended.
  • Compare Schema
    • Previously, users were unable to add the selected objects to impact analysis after executing the schema comparison. Users can now add these objects to impact analysis.
    • When comparing information of a specific schema, the application now displays the count of the tables added, deleted, or modified respectively to that schema.
    • The status of the renamed table was displayed as deleted in the “Action” column. Now the status displays accurate details about the action.
  • Deep Analysis Tool
    • The Advanced Tools module is now implemented with the 'Deep Analysis Tool' to display the transactional data changes in the application UI. To view this tool in UI, users must configure the key value of “enable.deepanalysis” as true in Administration | Configuration | Others.
    • Now users can filter the result in the remark column along with the schema name. The sort option is also available to sort the updated date, table name, and column name. Users can also download the deep analysis result using the download icon available at the bottom of the page.
    • After executing a deep analysis, the remark column in the table change summary will now display the count of the columns that underwent transactional data changes along with the remark.
    • For the Business Glossary template, the Excel sheet tab names have been renamed from ‘Term Relationships’ to ‘Related Objects’ and ‘Term Objects’ to ‘Associated Data’ to match the application’s UI terminology.
  • Load Metadata from Files
    • Previously, after adding a tag to a report/report column using the template with data, removing the same tag via the template still left the tag name visible in the UI. The removed tag name is no longer reflected in the UI.
    • When attempting to update technical descriptions of table columns via load metadata from files, users experienced issues with the information not being reflected in Data Catalog. However, the application has been modified to ensure that the technical descriptions are accurately displayed in Data Catalog.
    • Previously, when downloading a Template or Template with Data for the Table Columns, there was a mandatory field called "Column Type." However, as an improvement, this field has now been made optional.
    • When any Template with Data for any data object is downloaded, a notification email will be sent to the respective admin user notifying about the job status and template details.
    • While uploading files via Upload File or Folder and Load Metadata from Files, if the file contains any malicious content, those details will be displayed in the job logs.


Administration

  • Connectors
    • A Database Type field has been added with Regular and Unity Catalog options. The Regular option fetches all the schemas from the remote source, while the Unity Catalog option allows users to access tables from nested schemas in the remote source.
    • Previously, the application would encounter an error while performing a crawl/profile through Impala Kerberos Connector and Hive Kerberos connector. However, modifications have been made to the application to eliminate the error and restore its functionality as intended.
    • A new check box for "Technical Description" has been added to the crawler page.
    • Users have the capability to establish lineage for data sourced from a JSON file.
    • The Crawl option has been renamed to “Action” in the "Select Important for Crawling and Profiling" section.
    • Previously, when the user performed profiling using the Hive Kerberos Connector, the job was not being executed and the job logs were displaying an incorrect table alias. However, improvements have been made to the system and the job logs are now displaying the correct table details.
    • Previously, users were unable to validate the connection using OKTA. Power BI connection validation has now been improved with a client credential flow.
    • The Manage Connection form has been enhanced with a new dropdown parameter, Alteryx Files Type, which includes a OneDrive value that enables uploads from OneDrive.
    • In the "Manage Connector" pop-up for the Salesforce Connector, the "API Version" dropdown has been updated to include the latest versions of the Salesforce API.
    • A tabular plug-in displays columns for various connector types, including Base Connector, Auto Lineage Connector, Data Quality Connector, Data Access Connector, Integration Admin, and Permission.
    • Previously, the system was unable to parse a few views, resulting in lineage building failures. These views are now parsed correctly.
    • Manage Connection window | The SQL Server Windows Authentication and Informatica connectors now have an OvalEdge Environment dropdown menu, used to select the operating system on which the OvalEdge application is installed, such as Windows, Linux, or Unix.
    • WebFocus Connector now supports Delta Crawl for fetching only newly added and updated data objects from the remote system, reducing the amount of data transferred and improving performance.
    • For the Qlik Sense connector the Alias Host Name is newly added to the managed connection pop-up window to enter the alias hostname for establishing a connection with Qlik Sense.
    • The Connectors module now includes a “Connector Health” column that allows users to easily identify and resolve any validation issues. The column displays a round icon in either red or green, indicating whether the connection has been successfully validated at the scheduled time or not. Users can click on these icons to view a pop-up window that displays complete information about the executed job, including the time and details, as well as the mode (Manual or Auto).
    • The latest update to the Salesforce connector includes two important features. First, JWT authentication is enabled, allowing users to securely authenticate with Salesforce using a JSON Web Token. Second, File Path validation ensures that files uploaded to Salesforce are validated for security and compliance purposes.
    • Earlier, the application was unable to retrieve descriptions for Functions, Procedures, and Views during crawling. However, the functionality has been enhanced and the descriptions are now being fetched accurately.
    • An implementation has been made to integrate AWS Secret Manager connector for Power BI and IICS (Informatica Intelligent Cloud Services). This integration allows users to seamlessly connect and access AWS Secret Manager from both Power BI and IICS, providing enhanced security and centralized management of secrets and credentials. 
    • A request for a new implementation to synchronize Databricks certifications for tables and import those certifications into OvalEdge has been successfully implemented.
    • To support the Azure Key Vault, additional connection attributes were required. We have made the necessary improvements to accommodate this requirement.
    • For Athena Connector, previously, OvalEdge did not support executing queries on Athena. Users can now execute queries on Athena. It is important to note that the update queries operation is still not supported in this implementation. 
    • HashiCorp Connector
      HashiCorp Vault is implemented to read Data source (Such as Oracle, and Snowflake) Connection passwords. The Key-value secret credentials generated in the HashiCorp instance are used to access the data source. When establishing a connection with a data source, the OvalEdge application makes a call to the HashiCorp in real time to read the secret credentials.
      • Create Key-Value secret credentials with data source details in the HashiCorp Vault. In the OvalEdge application, connect to the HashiCorp Connector and configure the secret details generated in the HashiCorp Vault.
      • In the Administration Connector, HashiCorp is integrated into the OvalEdge Manage Connector form using the HashiCorp Base URL and Token generated from the HashiCorp website.
      • Vault Base URL*: The server name/URL used to connect to the HashiCorp connector.
      • Vault Token*: The vault token generated in the HashiCorp instance.
    • SuccessFactors Connector
      SuccessFactors is a cloud-based human capital management (HCM) system provided by SAP, and it offers APIs based on the OData EXT API for integration and data retrieval purposes.
      • To crawl and profile data from SAP SuccessFactors, the initial step is to establish a connection with SAP SuccessFactors using the OData EXT API authentication mechanism, which requires a Username, Password, and API Endpoint URL.
  • Users & Roles
    • While creating a new user now the default avatar will be displayed based on the first letter of the first name.
    • Users & Roles Management | The column "Privilege" has been changed to "Connector Account Privileges".
    • Users & Roles Management | Previously, when crawling, a new OvalEdge user was created for every remote user encountered. Now, a remote user is only mapped to an existing OvalEdge user when the username or email matches; no new OvalEdge users are created during crawling.
    • The user name is now displayed in ‘First Name, Last Name’ format. This naming convention is also applied to the rest of the Governance roles throughout the application.
    • In Users & Roles | Connector Roles | The column header has undergone a modification, and it has been updated from "Privileges" to "Account Privileges."
    • An enhancement has been implemented in OvalEdge, introducing a new role called "User & Role Admin." This role is specifically designed to manage and control users and roles within the OvalEdge application, similar to the existing Tag Admin role. By assigning the User & Role Admin role, the user will have privileges to access and manage the Users & Roles tab of OvalEdge, providing enhanced control and administration capabilities.
    • In the Roles tab | Users now have the ability to delete multiple roles at once with a single click. Checkboxes have been enabled next to each role, allowing users to select multiple roles for deletion.
  • Security
    • Admin users can now use the Administration > Security > Application module to give access to Business Viewers or Roles providing greater flexibility and control in managing user permissions. Business Viewers logging into the application will now have access to the following modules: Home, Tags, Data Catalog, Business Glossary, Data Stories, Dashboards, Service Desk, and a few sub-modules in the My Profile module. 
    • When users edited User/Role permissions on data objects, they were not redirected to the edited role item page. Instead, they were navigated to a different page. However, the navigation has been improved and users are now directed to the correct page after making updates to role permissions on data objects.
    • Users can be granted privileges to create a new domain specific to their roles.
    • The “Add User Access” option is now available in the apply security pop-up window for Folders, Report Groups, Reports, Domain, and Story Zone.
    • The Governance Roles columns now include a search option on all tabs.
    • In the Permissions pop-up, the Role Name column label has been changed from “Roles” to “Users Roles”.
    • Default roles applied to data objects can grant access to all data objects based on privileges; therefore, Default Roles are not applicable to the Applications module.
    • Approval Workflow | The “Approved By” label is renamed to “First Approved By”.
    • In Security, the Connector Creator role now has the ability to view all connection names on the Connector page. Also, the Domain Creator role now has the ability to view all domain names on the Security > Domain page.
  • Service Desk Templates
    • The Service Desk Template now lets users integrate one service request with multiple ticketing systems. For example, a table access request can be integrated with Jira, ServiceNow, and Azure DevOps.
    • An additional column is provided for the users to add any comments while approving or rejecting a service request.
    • In the "Mapper" section, once a mapper (such as ServiceNow) is configured, the clickable options for other mappers (such as Jira) are now disabled.
    • OvalEdge now sends hourly notifications to users who have not defined the Service Level Agreement (SLA) on a Template.
    • Previously, while editing a Service Desk Template with custom fields selected, Field Validation displayed the message 'Min-Max number of JSON API Required', even though validation worked correctly when configuring number custom fields with a Min-Max Number Range. The application has been updated to handle Field Validation for custom fields properly, and users no longer encounter this message.
    • The Service Desk Template now integrates seamlessly with Azure DevOps, allowing new tickets to be created with ease. When a ticket is generated in the Azure DevOps service desk using this template, the standard message is enhanced to display additional information.
    • The updated message now states, “This ticket has been raised in OvalEdge and has been posted for fulfillment. Please feel free to contact your designated OvalEdge Platform SPOC (Single Point of Contact) for any inquiries."
    • The user will be given the option to customize the description of any service ticket they raise according to their needs.
    • A new "Additional Information" field in the template lets users store stage-related data in JSON format. This includes icons, names, and descriptions for customized and structured storage. Users can configure field settings like visibility, editability, and mandatory status for each stage of creation, fulfillment, or approval. Visibility determines when the field is displayed, editability controls whether users can edit, and mandatory status decides whether the field is essential for raising a service request.
    • The new enhancement allows users to integrate a service request template with multiple external systems, such as Jira, ServiceNow, and Azure DevOps. This gives users the flexibility to switch between external systems and work within their preferred one. Configuring the integration options for each external system is simple and intuitive: the integration status can be set to "Active" or "Inactive", and users can easily activate, deactivate, or delete an integration as required.
    • When editing a Service Desk Template, the "Move to Draft" and "Delete Template" options are grayed out and disabled for system templates, since these templates are uneditable and permanent.
    • The Service Desk Templates tabular list view previously had a search icon for the "Created By" and "Updated By" columns. The search icon has been replaced with a filter icon for improved functionality and user experience.
    • Initially, the service desk template had the default priority for a ticket set as Medium. However, we have made improvements to allow you to modify the priority status according to your requirements. You now have the flexibility to choose from a range of priority options, including Highest, High, Medium, Low, and Lowest, enabling you to accurately reflect the urgency and importance of each service request.
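The "Additional Information" field described above stores stage-related data in JSON. As a rough illustration, the sketch below builds such a payload in Python; all key names (stages, visible, editable, mandatory, and so on) are hypothetical, not the actual OvalEdge schema.

```python
import json

# Hypothetical sketch of per-stage data the "Additional Information"
# field might hold: icon, name, description, plus the three behavior
# flags (visibility, editability, mandatory status) per stage.
additional_info = {
    "stages": [
        {
            "stage": "creation",
            "icon": "pencil",
            "name": "Request Created",
            "description": "Requester fills in ticket details",
            "visible": True,      # field is displayed at this stage
            "editable": True,     # user may change the value
            "mandatory": True,    # must be filled to raise the request
        },
        {
            "stage": "approval",
            "icon": "check",
            "name": "Pending Approval",
            "description": "Waiting on the approval workflow",
            "visible": True,
            "editable": False,
            "mandatory": False,
        },
    ]
}

def validate_stage(stage: dict) -> bool:
    """Check that a stage entry carries the three behavior flags as booleans."""
    return all(isinstance(stage.get(k), bool)
               for k in ("visible", "editable", "mandatory"))

payload = json.dumps(additional_info)  # the JSON string that would be stored
```

This only demonstrates the shape of the data; the real template would be configured through the UI rather than hand-written JSON.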
  • Advanced Jobs
    • The existing advanced job did not support the bridge component. A new advanced job, "LoadSSISFoldersWithBridge", has been added to support it.
    • Added a new Advanced Job - Report Download Job - to download reports with report names, business descriptions, and labels.
    • An enhancement has been implemented to replace the character '/' with '_' in table names when building lineage between the SAP HANA and Redshift connectors for tables that have the same name.
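The '/' to '_' replacement above amounts to a simple name normalization before lineage matching. A minimal sketch (the function name is ours for illustration, not an OvalEdge API):

```python
def sanitize_table_name(name: str) -> str:
    """Replace '/' with '_' so same-named tables on both sides of the
    lineage (e.g. SAP HANA -> Redshift) resolve to one consistent
    identifier. Purely illustrative of the behavior described above."""
    return name.replace("/", "_")

# A name containing '/' maps to the underscore form used for matching.
normalized = sanitize_table_name("SALES/2023_Q1")
```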
  • Custom Fields
    • After adding code custom fields and saving them, the checkboxes for Multi-select, Editable, and Viewable are now enabled.
  • Configuration
    • A new tab has been added to the Notifications tab for admin users to customize the notification messages that are sent out to users. The body of the message can be modified using the variables provided in the right panel; these variables correspond to particular events or features, and administrators can drag and drop them into the message body. Notifications can be customized for three different mediums: Inbox, Email, and Slack, with each medium having its own tab in the Notification Templatization section. Users can access it by hovering over a function and clicking the edit icon.
    • The user can set the threshold percentage for detecting anomalies based on the percentage of change from the last recorded value.
    • Previously, for a few configurations, the parameters drop-down displayed all the available roles; it now displays only the user roles associated with Author and Analytical Licenses. This change restricts access to certain functionalities to authorized roles only.
    • A History icon is now enabled next to every configuration, allowing users to view the history of changes made to a particular configuration, including when each version was created and who made the changes. This feature is particularly useful when multiple users are making changes to the same configuration or when it is important to track changes for auditing purposes.
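The anomaly-threshold configuration above can be read as a percentage-change check against the last recorded value. A minimal sketch, assuming the threshold is interpreted as absolute percentage change (the exact OvalEdge formula is not documented here, so treat this as an assumption):

```python
def is_anomaly(previous: float, current: float, threshold_pct: float) -> bool:
    """Flag an anomaly when the change from the last recorded value
    exceeds the user-configured threshold percentage.
    Illustrative only; not the product's actual implementation."""
    if previous == 0:
        # Any change from a zero baseline is treated as anomalous here;
        # this edge-case handling is our assumption.
        return current != 0
    change_pct = abs(current - previous) / abs(previous) * 100
    return change_pct > threshold_pct

# With a 20% threshold: a jump from 100 to 125 (25%) is flagged,
# while 100 to 110 (10%) is not.
```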
  • Miscellaneous
    • OvalEdge will now notify the users when the SSL certificate is approaching its expiration date.
    • In the OvalEdge Application | By hovering over any data object users can find additional functionalities such as Copy to clipboard, Quickview, and Open in a new tab.
    • The Data Quality Objects, Projects, and Impact Analysis modules now include an “Add Objects” option that lets users select up to 20 objects at a time. When users click the “Add Objects” button, the 'Add Object' pop-up window displays the list of all available objects, where users can click the copy icon to select objects.
    • OvalEdge now includes the “Base URL” of the server in the emails it sends to users, which directs them to the server from which the email originated.
      Example: If a user receives an email from the server amazonses.com, this information will be displayed in the email along with the OvalEdge email address.
    • Across the application when a search filter is used, the tooltip now displays selected options separated by commas for users to see which options are currently selected.
    • In OvalEdge Application | If a user is moved from one group to another (for instance, from Author to Business) in OKTA Authentication, the same information is dynamically updated without requiring the user to log into OE.
    • The interactive/clickable options are now highlighted in blue color when users hover over them across the application.
    • The username will now display the first name and last name separated by a comma instead of the email id in the “Created By” column across the application.
    • When the login page was left idle for 10 minutes and a user attempted to log in with valid credentials, the application previously generated an error message indicating "Invalid Credentials." The application has been modified to display "Session Expired" instead, so users can refresh the page and log in successfully without encountering any obstacles.
    • If a user is logged into both the application and the Chrome Extension simultaneously, logging out of the application will automatically log them out of the Chrome Extension.

Bug Fixes

Release 6.1 has resolved all of the listed bugs, resulting in the OvalEdge Application now functioning as intended.


  • While performing basic operations such as search and filter on the table column, the application experienced a 500 error and was running at a slow pace. This issue is fixed and the application is performing as intended.
  • The user was not able to find the SSIS package name consisting of spaces and special characters, in the Global Search, Build Lineage page, and Catalog Search. Now, this issue is resolved and the SSIS package name is displayed.


  • Catalog Highlights section | Previously, there were issues with the Top/Recent toggle button in the Catalog Highlights section:
    • The top results associated with different data objects such as Tables, Reports, Files, Codes, etc., were not displayed properly.
    • When the user toggled between Top and Recent or vice versa, the data in the Catalog Highlights section was not refreshed.
    These issues have now been resolved, and the results for the data objects are displayed according to the user's choice of Top or Recent.
  • Previously, if a project with applied advanced search filters was added to bookmarks, the bookmarked page did not load with the same filters selected. This issue has been resolved.
  • In Global Search | The user encountered an error when trying to navigate from the table column tab. This error was caused by a Liquibase Time Script run failure and has been resolved.
  • Nine Dots Menu | When a user tried to remove a term using the nine dots menu, the application did not return a confirmation message. This issue has been resolved, and the application now returns a message confirming the removal of the term.
  • In the “Add Term” pop-up window, the terms that were selected and added from the left existing term options drop-down were not displayed in the right panel. Now, the issue is resolved and the added terms are displayed in the right panel.
  • Nine Dots Menu | Add to Impact Analysis now displays pagination numbers displaying the current page and the total number of pages available.
  • In Global Search | When the user searched for data using the global search, the application returned irrelevant information. This issue is fixed.


  • There was an issue where the data objects associated with the current tag were not shown in the Association's sections. The issue is now resolved, and the data objects associated with the Tag are displayed properly.
  • When governance roles were edited in the DAG tag tab, the updated Governance Roles did not appear in objects assigned to the DAG tag. This issue is fixed: if the user changes the Governance role, objects assigned to the DAG tag are now updated in the data catalog.
  • If a tag was deleted, it was audited in the Audit Trails, which made it impossible to create a new tag with the same name through Load Metadata from Files; the error 'Tag with this name already exists in the audit trails' was displayed. This issue has been resolved.
  • The Tag Summary page displays the number of Tags associated with different types of objects, such as Databases, Tables, and Reports; if there are no Tags associated with an object type, it displays a count of 0.
    Example: If a table has five associated tags, the Table tab count will be five. Previously, the Tag Summary page displayed the Database tab by default even when it had no associated Tags. The default tab on the Tags page has been changed to prioritize object types with an association count greater than 0.

Data Catalog

  • If the user navigates to any data object and hovers over the term associated with it, the business description will be displayed.
  • Modifying and uploading a "Schema Template" in the Data Catalog failed to upload the data to the application, despite the job logs indicating a successful update. This issue is now fixed.
  • Previously, the business user who had Meta read permission was unable to view preview results following profiling. Despite having permission, the application displayed “0” instead of any data. Fortunately, this issue has since been resolved.
  • Previously, when a user clicked on the expanded menu, it would automatically navigate to the Database Tab without waiting for the user to make a selection. However, now it correctly navigates based on the user's selection in the expanded menu. If the user selects the Table data object then it navigates to the Table tab instead of the Database Tab.
  • The data was not updated when the user attempted to upload metadata using the Table Template. This issue is fixed.
  • The Code Custom Field search filter wasn't operational when code custom fields were populated to the tabular list view using Configure view. The issue has been resolved and is now functioning correctly.
  • In Data Catalog | After crawling a Power BI report, the report visualization was not displayed on the Data Catalog Report. This has been fixed, and the visualization of Power BI reports is now displayed.
  • Databases
    • When the user clicks on the sort icon to sort the Schema column, the sort icon does not work properly. This issue is now resolved.
  • Tables
    • In the Columns tab, the crawled data of the MySQL connector did not indicate the "Key" column as the "Primary Key". This issue has been fixed.
    • Duplicate data was displayed when crawling using the Salesforce Connector. This issue is now fixed.
    • In spite of the user having Meta Write Data Read (MWDR) permissions, the Download option of the Nine Dots was disabled, preventing the user from downloading the data. However, this issue has now been resolved, and users with MWDR permissions can download the data object data using the Nine Dots option.
    • Users with only Data Read permission were able to download the data. This has been fixed: the Download Data option in the Nine Dots menu is now disabled for users with Data Read permission.
  • Table Columns
    • The sorting option was not working on the "Status" and "Created On" columns. This issue is now resolved, and the sorting options work as expected.
    • The tag that is assigned to a specific table column was not automatically being assigned to the corresponding table. However, this issue has been resolved and the tag is now assigned to the corresponding table as well.
  • Reports
    • When the user crawls the reports using the Looker connector, the application displays duplicate columns. This issue is fixed and no duplicate columns are being displayed.
    • Previously, the filter conditions to sort the Path column in the Data Catalog and Security were missing. These filters have now been added as text values such as "Starts With" and "Contains", resolving the sorting issue so that accurate sorting results are displayed.
    • When the user clicked on the highlighted text in the graph on the Summary page of any report, an error occurred, and the report was not properly displayed. The issue was caused by an incorrect request parameter. However, this issue has been resolved, and now the user is able to visualize the report.
    • Users were facing an issue visualizing reports once lineage was built for Qlik Sense. This issue is now fixed.
    • After changing the column status from active to inactive, users were allowed to select the visible type, instead of displaying the visible type as 'invisible'. This issue has been resolved and the application is working as intended.
    • The user was unable to view pages in the Data Catalog after crawling Power BI reports and was also unable to build downstream lineage for Power BI reports. This issue is now resolved, and the user is able to view pages after crawling has been performed.
    • When a Qlik Sense connector was crawled, the Report Types column did not display the filter option for all types except "NON-Tagged", "Dashboard", and "QVDBuilders".
    • In the Build Auto Lineage page, the Object Type column did not accurately display the expected object types, such as "NON-Tagged" or "Dashboard".
    • Both issues regarding the Qlik Sense connector have been resolved, and the application is working as expected.
  • Report Columns
    • The user was unable to view report columns in the Data Catalog after crawling the PowerBI reports as a specific file named “PBIX” was not available. This issue is now fixed.
    • In the association tab, the user was unable to search table names to associate them with the code from SQL Server table objects. This is fixed and displays the table names accurately.
    • When the user added a term to an object using "Copy Title to the Catalog" and then removed that term, the term was removed from the object but the title of the term was still displayed. This issue is resolved: when the term is removed, the title of the term is removed as well.

Business Glossary

  • When objects are added as related objects for a term, the full path should be displayed as Table > Database_Name.Schema_Name.Table_Name. This issue has been resolved, and the path is now displayed in that format.
  • In the Data Objects tab, recommended data objects from AI recommendations did not appear in the Associated Object list even after they were accepted. This issue is now resolved.
  • Recommendation was working only for the admin user and not for a team. Now, the recommendation works for team users.
  • Related Objects | When adding related data objects, the Save button was not enabled but the data objects were still added without it. This issue has now been resolved and the data objects are added as related objects only when the “Save button” is clicked.
  • When a user clicked on a term and added associated data for tables and columns, the count was not visible on the summary page. This issue is resolved, and the count is now displayed on the summary page.
  • For Tables, the Columns tab did not display the correct count of data objects, showing zero even when multiple data objects were associated. This issue is now fixed, and the correct count of data objects is displayed.
  • When a user created a custom view and changed the order of the fields displayed in the default view (for example, moving the Custodian field to the start and Domain to the end), the field order was reversed from what the user specified. Additionally, if the user attempted to reorder a field (such as moving Status to the top) and then saved the view, the fields were saved in the wrong order. However, these issues have now been resolved, and the "Configure View" feature is functioning as intended.
  • After downloading the Term Details in an Excel sheet, the “Show Classification” column displayed “No” despite enabling “Show classification at dictionary” under Manage Data Associations. Now it is fixed to display “Yes” when the “Show classification at dictionary” option is enabled.
  • When a user added related tables for a term, the information about those related tables did not appear in the Data Catalog > Table > References tab. This issue is resolved, and the related tables added for a term are now displayed in the Table References tab.

Data Stories

  • The application would freeze when formatting a table, requiring the user to perform a hard refresh to get the application running again. This issue has been resolved and the user no longer needs to reload the application.


  • The Data Quality Score Report was not displayed due to an internal server error. The error has been resolved, and the user is able to view the Score Report.

Service Desk

  • When a term creation request was raised for an existing term name under the same domain, the application did not show an error message. This has been fixed, and the error message is now displayed.
  • For an API Access Request using a custom template, while adding a team to the approval workflow, users who had been removed from the team were also displayed. Additionally, after the request was raised, duplicate users were displayed under the approval workflow in the service request summary. Both issues have been resolved.

Governance Catalog

  • Data Classification
    • The page did not refresh automatically when a tag was added to the domain from the nine dots menu; after a manual refresh, the newly added tag was displayed, but the respective domain and its details were not. This issue has been resolved, and the respective domain details are now displayed.


  • After a user selected a domain and added it to the relevant category, the category field was not reset when the user selected another domain from the dropdown. This issue is now resolved, and the category field is reset.
  • Previously, when users with the OE_Public role attempted to delete a project, an error would occur. Similarly, when users tried to delete it from Projects | Board View | Nine dots, a success message stating that the selected project was deleted would be displayed. However, despite the message, the deleted project would still appear on the Projects page. This problem has now been resolved and deleted projects no longer appear on the Projects page.

File Manager

  • For NFS Connector, when a user uploaded a folder with files, only the folder was displayed in the File Manager, but the files within that folder were not visible. However, this issue has now been resolved, and both the folders and files are displayed in the File Manager.


  • The appropriate job log was not displayed while creating Business Glossary Terms. This has been fixed, and accurate job details are now displayed.
  • The job status was not changing from the “INIT” state to other states after crawling a database from Administrator | Connectors. The status of the job is now displayed accurately.
    • A similar issue, where the job logs did not display the right values while crawling a Power BI report, has also been addressed and fixed.

Advanced Tools

  • Impact Analysis
    • The schema name for the impacted objects was not displayed in the downloaded impact analysis results. For better understanding, the schema associated with the impacted objects is now displayed.
  • Lineage Maintenance
    • When a user clicks the Add button, the “Add / Edit Column Mapping” window is displayed. Previously, when the user added a new row, it was appended at the top of the existing rows rather than at the bottom. This is now resolved, and new rows are appended at the bottom.
  • Build Auto Lineage
    • In the Build Auto Lineage feature, there was an issue where selecting codes on the Build Auto Lineage page resulted in all codes from other pages being automatically selected, even if the user did not explicitly choose them. As a result, the lineage was built with unintended code selections. However, this selection process has been improved, and now the lineage is built only with codes that the user explicitly selects.
    • In the Build Lineage > Correct Query page, queries containing "Create or Replace view" statements were failing. This issue is now fixed.
  • Compare Schemas
    • Users were unable to add objects to Impact Analysis as affected objects by grouping, such as adding deleted objects, modified columns, or new objects. This capability has now been provided.
    • After adding the “Changed Column to Impact Analysis”, the incorrect connector name was displayed for the objects on the Impact Analysis summary page. Also, when the user downloaded the impact analysis results, not all results for the affected objects were displayed. Both issues have been resolved.
  • Load Metadata from Files
    • There was an issue adding tags to a term through the "Load Metadata from Files" Business Glossary template. This issue is now resolved, and the application is working as intended.
    • There was an issue where the Tags associated with Report Column were not being displayed when the column was downloaded. This issue is now fixed, and the tag associations are being displayed.
    • When a template lacks a header, the expected behavior is for the job to fail. However, the log displayed random table names that were not present in the CSV (even though no metadata was actually modified), and the job was erroneously marked as having run successfully. This issue is now resolved, and the job log displays accurate details.
    • The application returned an error when users tried to download the Business Template. This issue is now fixed, and users are able to download the template.
    • The application returned an error when the user attempted to upload metadata using the Table Template. This issue is fixed, and users are now able to upload metadata using the Table Template.
    • When attempting to download the Report or Report Column Template along with its data, users encountered an error message stating "Error while creating a CSV file", which prevented the download. This issue has been resolved, and users can now successfully download the Report or Report Column Templates along with their data.
    • The associated tag name in the application differed from the template with data for a few objects. This issue has been resolved.
    • The detailed description of a tag was not updated when using the Business Glossary template. This issue has been resolved.
    • Users were encountering 504 gateway-timeout errors while downloading the business glossary template. This issue is now fixed and the users are able to download the business glossary template.
    • The heading of the “Cronentry” column in the “Data Quality Rule template without data” has been modified to include the scheduling pattern, helping users enter the schedule time for the Data Quality Rule in the given pattern.
    • After downloading the Business Glossary template with data, the “ADD” option was displayed in a different field instead of the “Action” column. It is now fixed to display the “ADD” option in the “Action” column.
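If the “Cronentry” scheduling pattern follows the common five-field cron layout (an assumption on our part; the template's exact pattern may differ), an entry can be sketched and split into its named fields like this:

```python
# Standard five-field cron layout:
#   minute hour day-of-month month day-of-week
CRON_FIELDS = ("minute", "hour", "day_of_month", "month", "day_of_week")

def parse_cron(entry: str) -> dict:
    """Split a cron entry into named fields; raise if it does not
    contain exactly five space-separated fields."""
    parts = entry.split()
    if len(parts) != len(CRON_FIELDS):
        raise ValueError("expected 5 space-separated cron fields")
    return dict(zip(CRON_FIELDS, parts))

# e.g. schedule a Data Quality Rule every day at 02:30
schedule = parse_cron("30 2 * * *")
```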
  • OvalEdge APIs
    • Tags API | The “GET APIs” displayed the Parent Tag and Child Tag names as null, making it difficult to understand the hierarchy of the respective tags. This issue has been fixed, and the accurate names of the tags are now displayed.


  • Connectors
    • For the Looker Connector, the system was previously unable to retrieve all the Report Groups defined in the data source and display them in the Data Catalog Reports. This issue has been resolved, and the system now fetches all the Report Groups that exist in the data source and displays them in the Data Catalog Reports.
    • When editing the connection details of an ADP connector, the error "Error while fetching existing connector Attributes: null" was displayed. This issue is now fixed.
    • The tables for the SOP BOD Connector were missing when users attempted to run the temporary lineage correction due to a versioning issue. This issue is now fixed.
    • While building lineage, the Looker connector queries failed due to an error in a query. The errors in the query have been fixed and the application is working as expected.
    • There was an issue with creating temp lineage tables in the Tableau connector for building lineage from Tableau to Snowflake. This issue has been resolved and the users can now create temp lineage tables.
    • While validating the Power BI reports there were some null exception errors. Now it is fixed and users are able to validate the Power BI connection.
    • The AWS Secret Manager Connector failed to establish a valid connection for the Role Based Connection type, even though the bridge was running (Live). This issue has been resolved.
    • For SQL Server Connector, the lineage for column level mapping between (#FBNK_ACCOUNT_PREV to #ALL_FBNK_ACCOUNT) was not displayed correctly, even though the selected query was corrected and validated for the SQL Server Connector. This issue is now resolved and the column level mapping is being displayed accurately.
    • The application returned an error when users were crawling the data source using the Tableau Online Connector. This issue is now resolved, and users can crawl the data source without errors.
    • If a user encountered an authentication error while accessing the Data Source and attempted to crawl the metadata in the OvalEdge application with the Power BI connector, the already existing metadata was deleted. This issue is fixed, and the existing metadata is no longer deleted.
    • The application returned an error when the user attempted to build lineage using the Denodo Connector. This issue is now resolved.
    • The Deltalake Connector supports two types of databases (Regular and Unity). On the Manage Connection Form, the Deltalake connector was added and validated with the Regular Database type. When the user attempted to change the database type from Regular to Unity, an error message appeared. This issue is fixed and now using the drop-down menu user can change to another database type.
    • The Workday Connector could not be validated, and no validation errors were displayed. This issue is now fixed: users are able to validate the Workday Connector, and the application returns validation errors if validation fails.
    • In the Azure SQL Managed Instance Connector, there were no profile results when profiling views. This issue has been fixed, and the profile results are now displayed.
    • Duplicate unprocessed lineage status was shown. This issue is fixed and lineage is displayed accurately.
    • While adding a connector, even if the user did not select any roles for Integration Admins and Security and Governance Admins - both of which are mandatory - and clicked the Validate button, the connector would still be validated successfully. This issue has been resolved: if the user attempts to validate the connector without selecting these roles, an error message is displayed prompting the user to choose them.
    • While building lineage using the Tableau Connector, the existing table was not sourced; instead, a temp lineage table was created. This issue has been resolved.
    • In Settings | Profiler, when a user attempts to change the Profile Type, an error message "Problem Updating Settings" is displayed. However, this issue has been resolved and users are now able to successfully change the Profile Types - such as Auto Disabled, Query, and Sample - for the connector without encountering this error message.
    • The Power BI Connector crawler page and Auto Build Lineage were not displayed because a specific “PBIX” file was missing. This issue is now fixed, and the options are displayed.
    • For Azure Data Lake Connector, there was an issue where even if the user provided the correct details for establishing an Azure Data Lake connection, a validation error would occur and the error logs would not be displayed. However, this issue has been resolved, and now users can successfully establish an Azure Data Lake connection with the storage account details without encountering validation errors or missing error logs.
  • Users & Roles
    • When a new user was added by providing their details and email address, the user received an email containing the OvalEdge application URL and their login credentials. However, the URL in the email did not work, and the user was unable to access the application. This issue is resolved, and the URL received by newly registered users now works properly.
    • The "Role Description Field" was taking up excessive space, which has been fixed.
    • When a user is deleted, that user's permissions on database objects are now transferred or deleted, and the deleted user's permissions no longer appear in Administration > Security.
    • An error message "There is some problem getting a result" appeared when selecting a Snowflake connector and clicking on the Connector Role tab. This issue is now fixed.
  • Security 
    • When a user clicked the Edit icon in the Available Roles column to apply security, the user interface previously navigated to an "Update Permission" page for Databases, Tables, Files, and Folders, while a pop-up window appeared for the remaining data assets. For an improved user experience, the Update Permission page is now uniform across all data assets in the Security module.
    • A "Reset Permission" option has been added, the "Update Permission" option has been removed from the Nine Dots menu, and a custom row access policy section has been added.
    • In the Edit Permissions screen for all RDAM connectors, the 'Role Name' label has been changed to 'Users / RoleNames'.
    • The OE Admin user was unable to see the tables in the tables tab. This issue is now resolved.
    • The Save button has been replaced with a checkbox, making it more intuitive for users to save their changes. Additionally, an issue where a selected and then modified category or sub-category name could not be saved has been resolved.
  • Service Desk Templates
    • A new custom template was added and published at the connector level for the object type Table. Although the template was published, it was not available at the connector level. This issue is resolved, and the template is now available at the connector level.
    • The Business Glossary template header was not clearly visible due to the black font color on the blue cell background. To improve the visibility of the template headers, the font color has been changed to white.
    • During term creation, it is possible to clone a Term Creation Request template. When the user edited and saved the cloned template, an error message was displayed because fields went missing during cloning. This issue is now resolved.
  • Advanced Job
    • There was an issue executing the advanced job "Get queries from Vertica by using a query". This issue has been fixed, and users can now execute the advanced job.
    • In the Deep Analysis project, when a single schema was selected for crawling, the job log showed proper results, but when multiple schemas were selected, a String Error was displayed. This issue is fixed, and correct results are now shown when multiple schemas are selected.
    • When building lineage for files with identical table and column names, the wrong source was picked because selection was based on File ID. This issue is now fixed: the source is determined by Folder ID instead of File ID, the correct source is used to build lineage, and the log value is displayed in Job Logs.
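The Advanced Job lineage fix above can be illustrated with a short sketch. This is illustrative only: the dictionary structures, field names, and helper function below are hypothetical, not OvalEdge internals. The idea is that when several files expose the same table and column names, selecting the source by Folder ID disambiguates them.

```python
def resolve_lineage_source(candidates, target_folder_id):
    """Pick the lineage source among files sharing table/column names.

    Hypothetical sketch of the fixed behavior: the source is selected by
    Folder ID rather than File ID, so files with identical table and
    column names in different folders no longer collide.
    """
    for candidate in candidates:
        if candidate["folder_id"] == target_folder_id:
            return candidate
    return None  # no source found in the requested folder

# Hypothetical example: two files expose the same table name "orders".
candidates = [
    {"file_id": "f1", "folder_id": "sales", "table": "orders"},
    {"file_id": "f2", "folder_id": "finance", "table": "orders"},
]
source = resolve_lineage_source(candidates, "finance")
```

Keying on the folder rather than the file makes the lookup deterministic even when file-level names repeat across folders.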
  • Custom Fields
    • When local custom fields were added to a connector, the default (existing) global custom field values were not displayed. Adding a local custom field should not remove existing values. This issue is now fixed, and both local and global custom field values are displayed.
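The intended merge behavior can be sketched as follows. This is an illustrative example only: the field names and dictionary-based representation are hypothetical, not the actual OvalEdge implementation. Local custom fields are layered on top of the global defaults rather than replacing them.

```python
def merge_custom_fields(global_fields, local_fields):
    """Combine global and connector-local custom field values.

    Local fields are layered on top of the global defaults, so adding a
    local field never removes an existing global value (the fixed behavior).
    """
    merged = dict(global_fields)   # start from the global defaults
    merged.update(local_fields)    # add local values; globals persist
    return merged

# Hypothetical example: a local field is added without losing globals.
global_fields = {"Data Owner": "alice", "Retention": "5y"}
local_fields = {"Connector Tier": "gold"}
all_fields = merge_custom_fields(global_fields, local_fields)
# all_fields now contains all three fields, matching the corrected behavior.
```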
  • Configuration 
    • The "ovaledge.connector.creator" and "ovaledge.domain.creator" should be limited to the "Author User" value, specifically OE-ADMIN. If any value other than "Author User" is entered, an error message should be displayed to indicate that the role is not available. This issue is now fixed and an error message is displayed if the role is not available.
    • The issue where the "Updated By" and "Updated On" columns fail to update automatically when the user makes changes to the "Value" column is resolved.
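The corrected creator-role validation can be sketched as follows. This is an illustrative example only: the role set and helper function are hypothetical assumptions, not the actual OvalEdge configuration code. It mirrors the documented behavior of rejecting any non-Author-User value for the two creator settings.

```python
# Hypothetical set of Author User roles; OE-ADMIN comes from the release note.
AUTHOR_USER_ROLES = {"OE-ADMIN"}

CREATOR_KEYS = ("ovaledge.connector.creator", "ovaledge.domain.creator")

def validate_creator_role(key, value):
    """Reject any role that is not an Author User role for creator settings.

    Sketch of the fixed behavior: the creator configuration keys accept
    only Author User roles; any other value raises an error message.
    """
    if key in CREATOR_KEYS and value not in AUTHOR_USER_ROLES:
        raise ValueError(f"Role '{value}' is not available for '{key}'")
    return value

validate_creator_role("ovaledge.connector.creator", "OE-ADMIN")  # accepted
```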

Copyright © 2023, OvalEdge LLC, Peachtree Corners GA USA