OvalEdge Releases

Release 6.1

OvalEdge Release 6.1 introduces a wide array of advanced features and improvements, all aimed at elevating your data management experience. Below is a comprehensive overview of the enhancements included in this release:

  1. Administrative Roles Enhancements: Admins now have the ability to assign specific roles and permissions to users at the connector level, providing better control over access and data governance.
  2. New Licensing Model: The introduction of the Viewer and Author Licenses offers more flexibility in managing user roles and access rights.
  3. Improved Data Quality Rules (DQR): The DQR functionality has been updated and enhanced, allowing users to create and manage data quality rules, ensuring better data governance practices.
  4. Enhanced Service Request Template: The Service Request Template feature has been significantly improved, supporting multiple external service integrations and enabling users to set external service systems as approvers in the approval workflow, streamlining the service request process.
  5. Customizable Notification Templates: Users now have the option to customize notification messages received via Jira, ServiceNow, Azure DevOps, and other mediums, tailoring communication to their specific needs.
  6. Deep Analysis Tool: A powerful addition to the platform that fulfills the need for a more in-depth and comprehensive examination of data changes within a specific schema. It extends the functionality of the Compare Profile and Compare Schema tools.
  7. Connector Health: OvalEdge now provides a Connector Health feature, indicating connector status based on overall performance. Users can monitor factors such as data transfer success rate, response times, error rates, and data handling efficiency for smooth data pipelines and workflows.
  8. Enhanced Global Search: The Global Search feature has been enhanced to offer global bookmarks for quick access to different environments. Users can now search for terms using classifications, allowing them to find specific terms categorized under different classifications. Additionally, customization settings for search results have been improved, enabling users to fine-tune their search preferences.
  9. Bug Fixes and Improvements: OvalEdge Release 6.1 addresses various bugs and issues reported by users, ensuring a smoother and more reliable experience for all.

Release Details

Release Type: Minor Release

Release Version: Release 6.1

Build <Release. Build Number. Release Stamp>: Release6.1.6100.4ac0fe3

Build Date: 03 July 2023

What’s New

Administration Roles

OvalEdge Release 6.1 brings significant changes to the Administration roles, improving overall efficiency and reducing reliance on the super admin for all tasks. In the previous version, the super admin user had complete control over all tasks in OvalEdge, while the crawler admin was responsible for crawling data sources. However, this setup caused delays in task approval and execution. With the new upgrade, every connector can now have its own set of admins (Author users) configured, leading to a more streamlined process.

At the connector level, the following administrators are configured:

Crawler Administrator:

  • Responsible for adding a connection.
  • Nominates Integration and Security & Governance Administrator during connection establishment.

Integration Administrator:

  • Manages the configuration settings for the profile and crawler settings of the connector.
  • Has the authority to delete the connector, schema, or data objects associated with it.

Security and Governance Administrator: 

On a connector level, this admin has the following privileges:

  • Sets permissions for roles on all data objects associated with the specific connector.
  • Updates governance roles on data objects linked to the connector.
  • Creates custom fields on all data objects associated with the connector.
  • Manages the creation of Service Request templates for the connector.
  • Has the authority to create and manage domains.
  • Configures categories, sub-categories, and classifications for each domain.
  • Assigns permissions to users or roles authorized to access the domain and its associated terms.

These enhancements provide a more granular level of control and empower admins to handle specific tasks efficiently within their designated connectors. By distributing responsibilities among different administrators, OvalEdge Release 6.1 ensures smoother task execution and faster approvals, contributing to an enhanced data management experience.

For more information on the New Administration Roles, please refer to the OvalEdge New Administrative Capabilities document.

New Licensing Model

In OvalEdge Release 6.1, we are excited to introduce new licenses that offer enhanced access to data objects, catering to different user needs. The two licenses available, "Viewer" and "Author," are each designed to provide varying levels of privileges for effective data management.

Viewer License:

  • Metadata: Provides read access to the metadata of data objects.

  • Data: Can be granted Data No Access or Data Preview.

  • Ideal for users who need to view data objects without making any modifications.

Author License:

  • Offers the highest level of privileges for managing data objects and can be assigned to governance stakeholders such as Owners, Stewards, and Custodians.

  • Author users can actively participate in approval workflows, ensuring seamless data governance.

  • They have permission to view and modify both metadata and data within data objects.

These new licenses provide a more granular and tailored approach to data access, allowing users to efficiently manage data according to their specific roles and responsibilities.

For more information on the New Licensing Model, please refer to the Release 6.1 License Types document.

Improved Data Quality Rules (DQR)

The latest updates to the Data Quality Rules (DQR) module are aimed at improving existing functionalities and introducing new features for an enhanced data management experience.

Add Rule Pop-up in Data Quality Rules:

  • The rule-creation process has been enhanced for greater intuitiveness, allowing users to access various options such as scheduling, notifications, and success percentage, thereby streamlining the rule-setup process.

Data Quality Objects:

  • Predefined Data Quality Rules are now available for different data objects, including tables, table columns, and reports.

  • Users can schedule specific DQRs, view executed reports, and enable/disable rules individually or collectively.

  • Notifications can be enabled, and service requests can be automated through dedicated radio buttons in case of a DQR failure.

Control Center:

  • A new Control Center has been introduced to provide better monitoring and management of executed data quality rules and their execution status.

Data Anomaly Detection:

  • The new feature enables the detection of anomalies in data assets, notifying users of potential data quality issues.

Data Quality Index (DQI):

  • OvalEdge now offers better insights to users about a DQR that was executed, with new parameters such as DQR score, Service Request Score (SR Score), and Child Score.

    • DQR Score is calculated based on the outcome of the DQ Rules executed on the specific data object.

    • SR Score is calculated based on the weighting of open service requests associated with the data object.

    • Child Score takes into consideration the weightage of child objects, such as columns being children of a table.

These updates significantly enhance the DQR feature capabilities, providing users with robust data governance tools to maintain data quality and consistency across their organization. 
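As a sketch of how these three parameters combine, the Data Quality Index can be read as a weighted average. The 50/25/25 split below mirrors the default weightage configurations listed later in this document; the function name itself is illustrative:

```python
def data_quality_score(dqr_score, sr_score, child_score,
                       weights=(50, 25, 25)):
    """Weighted average of the three DQI components.

    Default weights mirror the configuration defaults
    (DQR 50%, Service Request 25%, Child 25%); the total
    weightage must add up to 100%.
    """
    if sum(weights) != 100:
        raise ValueError("weightages must add up to 100%")
    w_dqr, w_sr, w_child = weights
    return (dqr_score * w_dqr + sr_score * w_sr + child_score * w_child) / 100

# A table with a clean DQR run, some open service requests,
# and weaker column-level (child) scores:
score = data_quality_score(dqr_score=100, sr_score=80, child_score=60)  # 85.0
```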

For more information on the Improved Data Quality Rules (DQR), please refer to the following documents.

Enhanced Service Request Template

OvalEdge Release 6.1 brings exciting updates to the Service Desk Template, Fulfillment Mode, and Approval Workflows, providing users with more flexibility and customization options for their service requests.

Service Desk Template, Fulfillment Mode:

  • Users can now choose between Automated and Manual Fulfillment modes, tailoring the request processing to their preferences.

  • In Manual Mode, the final approver has the option to fulfill the request after the approval process is completed, allowing for more customized handling.

  • In Automated Mode, the service request is fulfilled automatically after approval, presenting the user with a list of automatic fulfillment options to choose from.

Customize Template Fields - Custom Options:

  • When adding fields to a template, users can configure various settings using JSON format for each stage (pre-creation, creation, fulfillment, approval, and final approval).

  • Visibility settings determine when and where the field should be displayed in the template.

  • Editability settings control whether users can edit the field details at different stages.

  • Mandatory status settings determine if a field is essential for raising a service request.
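As an illustration only, a per-stage field definition in JSON might look like the following. The key names here are hypothetical, not the product's actual template schema; each stage carries its own visibility, editability, and mandatory flags:

```python
import json

# Hypothetical field definition; key names are illustrative and do not
# reflect OvalEdge's actual Service Request template schema.
field_config = {
    "fieldName": "Business Justification",
    "stages": {
        "pre-creation":   {"visible": False, "editable": False, "mandatory": False},
        "creation":       {"visible": True,  "editable": True,  "mandatory": True},
        "fulfillment":    {"visible": True,  "editable": False, "mandatory": False},
        "approval":       {"visible": True,  "editable": False, "mandatory": False},
        "final-approval": {"visible": True,  "editable": False, "mandatory": False},
    },
}

print(json.dumps(field_config, indent=2))
```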

Support for Multiple External Integration Systems:

  • The service request template now supports multiple external integrations such as Jira, ServiceNow, and Azure DevOps, offering flexibility to switch between different external tools from a single template.

  • Each integration can be easily configured and managed through a user-friendly interface with "active" or "inactive" status options.

  • Template fields can be mapped to corresponding external service tools.

Approval Workflows:

  • The Approval Workflow now allows external tools (e.g., Jira, ServiceNow, Azure DevOps) as approvers for service requests, expanding the options for efficient approval processes.

  • The SLA (Service Level Agreement) checkbox triggers advanced notifications to approvers, ensuring timely processing and improved accountability.

  • Users can set specific timeframes for the approval process, and approvers must meet designated response times to prevent delays or overlooked requests.

These enhancements provide users with powerful tools to customize service request handling, streamline approval workflows, and seamlessly integrate external systems for a more efficient and productive experience. 

For more information on the Enhanced Service Request Template, please refer to the following documents.

Customizable Notification Templates

OvalEdge Release 6.1 introduces the all-new Notification Templatization feature! With this exciting addition, admins have unprecedented control over what notifications users receive.

With our user-friendly interface, admin users can effortlessly tailor notification messages by simply dragging and dropping variables that correspond to specific events or features. The process is intuitive and streamlined, allowing admins to create personalized messages that resonate with users.

The Notification Templatization feature supports three different mediums for notifications: Inbox, Email, and Slack. Each medium has its own dedicated tab, making it convenient for admins to customize messages for different communication channels.

Get rid of generic and standardized notifications! OvalEdge Release 6.1 puts customization power in your hands, ensuring that your users receive relevant and engaging messages through their preferred channels. Upgrade today and unlock the full potential of personalized notifications with our innovative Notification Templatization feature.

Navigation: OvalEdge Application > Administration > System Settings > Notifications Templates.

Deep Analysis Tool

The Deep Analysis Tool in OvalEdge provides invaluable insights by identifying impacted data objects across different data systems when a business transaction occurs in one or more enterprise applications. This analysis is particularly useful for understanding the flow of information specific to a business transaction or use case, leveraging profiling metrics of data objects captured in a controlled setting.

Follow these steps to identify transaction-related data objects:

  • Restrict Concerned Applications:
    Ensure that the concerned applications are restricted for generic use, allowing focused analysis.
  • Baseline Profiling:
    Perform profiling of the connectors of the data systems for the applications. This establishes the baseline metrics before the business transaction.
  • Perform Business Interaction:
    A user performs the desired business interaction on the enterprise application(s).
  • After Transaction Profiling:
    Conduct profiling of the connectors of the data systems again. This generates profiling metrics after the completion of the business transaction.
  • Comparison and Analysis:
    Compare the before and after profiling metric sets. Any data objects whose profiling metrics have changed are potential candidates related to the business transaction.
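The comparison step amounts to diffing the two profiling snapshots. In this sketch the object names and metrics (row_count, null_count) are examples only:

```python
def changed_objects(before, after):
    """Return the data objects whose profiling metrics differ between
    the baseline and the post-transaction snapshot."""
    return sorted(obj for obj in before
                  if obj in after and before[obj] != after[obj])

baseline = {
    "sales.orders":    {"row_count": 1000, "null_count": 4},
    "sales.customers": {"row_count": 250,  "null_count": 0},
}
post_txn = {
    "sales.orders":    {"row_count": 1003, "null_count": 4},  # rows inserted
    "sales.customers": {"row_count": 250,  "null_count": 0},  # unchanged
}

# 'sales.orders' becomes a candidate object touched by the transaction.
candidates = changed_objects(baseline, post_txn)
```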

The information obtained through the Deep Analysis Tool is invaluable for understanding the impact of business transactions in complex enterprise environments with multiple integrated applications.

Note: Prior to utilizing the Deep Analysis feature, it is essential for the schema under consideration to undergo profiling at least once. This prerequisite ensures the availability of necessary profiling data, facilitating accurate comparison of metrics and yielding reliable and meaningful analysis results.

Configuration: OvalEdge Application > Administration > System Settings > Others > Search for the “enable.deep.analysis” key > Set the value to “true”

For more information on the Deep Analysis Tool use case, please refer to the Deep Analysis Tool Use Case document.

Connector Health

Connector Health in OvalEdge refers to the status based on the overall performance of a connector within an application or data integration platform. This assessment takes into account various factors, including the success rate of data transfers, response times, error rates, and the connector's ability to handle large data volumes efficiently. Monitoring connector health is vital for maintaining smooth data pipelines and workflows.

Key Benefits of Monitoring Connector Health:

  1. Issue Identification and Resolution: Monitoring Connector Health helps identify and address any issues or bottlenecks that could potentially impact data flow and data integrity. Early detection allows for timely resolution and minimizes disruptions.
  2. Proactive Management and Optimization: Keeping an eye on connector health enables proactive management and optimization of data integration processes. By identifying areas for improvement, teams can fine-tune the connectors to achieve optimal performance.
  3. Reliable Data Transfer: Ensuring the health of connectors guarantees reliable and accurate data transfer between systems. This is critical for maintaining data consistency and data quality throughout the integration process.

In the OvalEdge application, the Connectors Health feature displays a clear indicator of the connection status for each connector. A green icon indicates an active connection, while a red icon indicates an inactive connection. This visual representation provides users with a quick and easy way to monitor the health of their connectors, facilitating efficient data management and data integration within the platform.

For more information on Connector Health, please refer to the Connector Health document.

Enhanced Global Search

The enhanced Global Search feature is designed to provide a more efficient and personalized search experience for users.

  • Global Bookmarks for Easy Environment Switching:
    Users now have access to global bookmarks at the top header of the application platform. These bookmarks offer quick access to different environments and cannot be modified, ensuring seamless switching between important environments.
  • Search by Classifications:
    Users can now search for terms using classifications. Simply entering the name of a classification in the search box will yield all the terms classified under that category. This streamlined approach makes it easier for users to find specific terms, especially when dealing with multiple terms classified under different categories.
  • Enhanced Customization Settings for Search Results:
    OvalEdge now offers improved customization settings for search result calculations. Users can tailor the display of relevant search results to match their specific requirements. This includes the ability to adjust settings for including or excluding synonym and popularity scores, enabling users to fine-tune search results according to their preferences.
    • If both Synonym and Popularity Scores are enabled: When the configuration settings ‘globalsearch.score.use.synonym’ and ‘globalsearch.score.use.popularity’ are both set to true, the search results are ranked using the formula (Elasticsearch Score + Synonym Score) multiplied by Popularity Score. This ensures that search results consider Elasticsearch Score, Synonym Score, and Popularity Score, resulting in more accurate and relevant rankings.
    • If only Synonym Score is enabled (Popularity Score is disabled): When ‘globalsearch.score.use.synonym’ is set to true and ‘globalsearch.score.use.popularity’ is set to false, the search results are scored by combining Elasticsearch Score and Synonym Score; Popularity Score is excluded from the calculation. This configuration allows for a focus on synonym matches while disregarding the popularity of search results.
    • If only Popularity Score is enabled (Synonym Score is disabled): When ‘globalsearch.score.use.synonym’ is set to false and ‘globalsearch.score.use.popularity’ is set to true, the search results are ranked based on Elasticsearch Score multiplied by Popularity Score; Synonym Score is not considered. This configuration emphasizes the popularity of search results, ensuring that more frequently accessed or relevant items are prioritized.
    • If both Synonym and Popularity Scores are disabled: When both ‘globalsearch.score.use.synonym’ and ‘globalsearch.score.use.popularity’ are set to false, the search results are based solely on the Elasticsearch Score; Synonym Score and Popularity Score are not taken into account. This configuration simplifies the scoring system, focusing only on the relevance determined by the Elasticsearch Score.
  • Advanced Filters:
    The existing Advanced Filters in the Global Search have been enhanced to include additional conditions, providing users with more refined search options for precise results.
    • Equal to condition
      The "Equal to" condition in the Advanced Search data filter specifies that the search results should include only those items that match the exact value provided.
      Example: Suppose we have a dataset of employees with a column named "Department" and want to find all employees in the Sales department. Set the filter as Field: Department, Operator: Equal to, Value: Sales. The search results will then display only those employees whose Department value is exactly "Sales," narrowing the search to data matching the given condition.
    • Starts with condition
      The "Starts With" condition in the Advanced Search data filter retrieves items whose value begins with a specific set of characters.
      Example: Assume we have a dataset of customers with a column named "Name" and want to find all customers whose names start with the letter "A". Set the filter as Field: Name, Operator: Starts With, Value: A. The search results will include only those customers whose names start with "A," providing a more targeted search.
    • Ends with condition
      The "Ends With" condition in the Advanced Search data filter retrieves items whose value ends with a specific set of characters.
      Example: Suppose we have a dataset of products with a column named "SKU" (Stock Keeping Unit) and want to find all products whose SKU ends with the number "123". Set the filter as Field: SKU, Operator: Ends With, Value: 123. The search results will include only those products whose SKUs end with "123," enabling the search to be refined by specific ending characters.
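The three conditions boil down to exact-match, prefix, and suffix tests. A minimal sketch, where the rows and field names are illustrative:

```python
def apply_filter(rows, field, operator, value):
    """Apply the Equal to / Starts with / Ends with conditions
    from the Advanced Search data filter to a list of records."""
    tests = {
        "Equal to":    lambda v: v == value,
        "Starts with": lambda v: v.startswith(value),
        "Ends with":   lambda v: v.endswith(value),
    }
    test = tests[operator]
    return [row for row in rows if test(row[field])]

employees = [
    {"Name": "Alice",  "Department": "Sales"},
    {"Name": "Arun",   "Department": "Marketing"},
    {"Name": "Brenda", "Department": "Sales"},
]

sales_team = apply_filter(employees, "Department", "Equal to", "Sales")
a_names = apply_filter(employees, "Name", "Starts with", "A")
```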

With these enhancements, the Global Search feature in OvalEdge is now more powerful and user-friendly, allowing users to find the information they need quickly and accurately. 
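The four flag combinations for the search-result calculation reduce to a single formula. This sketch applies the two globalsearch.score.use.* settings exactly as described above; the score values themselves are illustrative:

```python
def relevance_score(es_score, synonym_score, popularity_score,
                    use_synonym, use_popularity):
    """Combine the three scores per the globalsearch.score.use.synonym
    and globalsearch.score.use.popularity flags."""
    score = es_score + (synonym_score if use_synonym else 0)
    if use_popularity:
        score *= popularity_score
    return score

# Both enabled: (Elasticsearch + Synonym) * Popularity
both = relevance_score(2.0, 1.0, 1.5, use_synonym=True, use_popularity=True)
# Both disabled: Elasticsearch score alone
neither = relevance_score(2.0, 1.0, 1.5, use_synonym=False, use_popularity=False)
```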

For more information on Configuring Global Search Results, please refer to the Configuring Global Search Results Calculations document.

New Configurations

The latest release introduces a set of new configurations that provide users with even greater control over the application's behavior. The newly added configurations are listed in the following Table.

Configurations

Descriptions

tags.children.pagination.row.size

Set the number of child tags to be displayed on the respective Tag summary page.

Parameters:

The default value is set to 100.

Enter the value in the field provided.

health.pagination.row.limit

Set the pagination limit for displaying the number of health history records in the Connector Health pop-up.

Parameters:

The default value is set to 20.

Enter the value in the field provided.

dataquality.catalogobjects.additionalcolumns

Configure the weightage of the Data Quality Rule score to calculate the data quality score on data objects.

Note: Data Quality Score is the weighted average of three parameters: Data Quality Rule Score, Service Request Score, and Child Score. The total weightage of all three parameters should add up to 100%.

Parameters:

The default value is set to 50.

Enter the ratio percentage in the field provided.

dq.dashboard.srscore.weightage

Configure the weightage for the service request score to calculate the data quality score on data objects.

Note: Data Quality Score is the weighted average of three parameters: Data Quality Rule Score, Service Request Score, and Child Score. The total weightage of all three parameters should add up to 100%.

Parameters:

The default value is set to 25.

Enter the ratio percentage in the field provided.


dq.dashboard.childscore.weightage

Configure the weightage of the child score to calculate the data quality score on data objects.

Note: Data Quality Score is the weighted average of three parameters: Data Quality Rule Score, Service Request Score, and Child Score. The total weightage of all three parameters should add up to 100%.

Parameters:

The default value is set to 25.

Enter the ratio percentage in the field provided.

anomaly.detection.analysis.data.iqr.range

Set the minimum and maximum range for data changes to identify any anomalous data objects. Any data objects that lie outside of this range will be considered abnormal or anomalies.

Parameters:

The default value is set to 10-50.

Enter the value in the field provided to configure the new maximum limit.

anomaly.detection.analysis.deviation.threshold

Set the threshold percent above or below which anomaly will be generated for the rate of change in the data series. Any data objects that lie outside of this threshold will be considered abnormal or anomalies.

Parameters:

The default value is set to 50.

Enter the value in the field provided to configure the new maximum limit.
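Read together, the two anomaly-detection settings can be sketched as follows. This is only an illustrative reading of the configuration descriptions (OvalEdge's internal detection logic is not documented here); the defaults mirror the values above:

```python
def is_anomalous(change_pct, iqr_range=(10, 50), deviation_threshold=50):
    """Flag a data change as anomalous when it falls outside the
    configured IQR range or exceeds the deviation threshold.

    Defaults mirror the configuration defaults above (10-50 and 50%).
    Illustrative only; not OvalEdge's actual detection code.
    """
    low, high = iqr_range
    outside_range = not (low <= abs(change_pct) <= high)
    beyond_threshold = abs(change_pct) > deviation_threshold
    return outside_range or beyond_threshold

# A 30% change sits inside the 10-50 range and under the 50% threshold.
normal = is_anomalous(30)   # False
# A 60% change is outside the range and above the threshold.
anomaly = is_anomalous(60)  # True
```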

oe.globalsearch.searchorder

Control the display of tabs in the Global Search by enabling or disabling them. This applies to both the tabs across the top of the page and the filter left panel.

Parameters:

Enter the tab names in the field provided.

  • All - ALL
  • Databases - schema
  • Tables - oetable
  • Tags - oetag
  • Table Columns - oecolumn
  • Files - oefile
  • File Columns - oefilecolumn
  • Reports - oechart
  • Report Columns - chartchild
  • Codes - oequery
  • Business Glossary - glossary
  • Data Stories - oestory
  • Projects - project
  • Service Requests - servicedesk

homepage.welcomestory.fullsize

Configure how the Welcome story should be displayed on the Home Page by default.

Parameters:

The default value is False.

  • If set to True, the welcome story will be displayed to its maximum size.
  • If set to False, the welcome story will be displayed within a limited size with 'See More' and 'See Less' options to expand/collapse the content.

azuredevops.spoc.message

Define a standard message in the Azure DevOps > Stories > Description field.

Parameters:

Enter the message to be displayed in the Description Field.

dataquality.associatedobjects.max.limit

Set the maximum number of data objects that can be associated with a DQ Rule.

Parameters:

The default value is set to 1000.

dataquality.controlcenter.failedrecords.max.limit

Configure the maximum number of failed results of the DQR (Data Quality Rule) to be displayed in the Control Center Tab.

Parameters:

The default value is set to 50.

Enter the value in the field provided to configure the new maximum limit.


glossary.associations.view

Configure to display either Published Terms or Terms in both Published & Draft status in the drop-down options when associating terms to data objects.

Parameters:

The default value is set to False.

  • If set to True, only published terms will be displayed in the drop-down when adding terms to data objects.
  • If set to False, terms in both published and draft status are displayed in the drop-down when adding terms to data objects.

oe.user&role.admin

Assign the 'User and Role Creator' privileges to Roles.

Parameters:

The default value is OE_ADMIN.

Click on the field to select any role from the drop-down.

project.task.assignee.change

Configure to add a role as a task assignee who can reassign tasks to project members.

Parameters:

The default value is set to PROJECT_MEMBERS.

Click on the field to select any role from the options PROJECT_OWNER, CURRENT_ASSIGNEE, and PROJECT_MEMBERS.


project.task.object.assignee.permissions

To check the access permissions a user has on data objects before assigning a task to any member.

Parameters:

The default value is empty.

Click on the parameter field to check access permissions on a data object. The available access permissions are as follows: META_READ, META_WRITE, DATA_PREVIEW, DATA_READ, DATA_WRITE, GOVERNANCE_ROLE, and ADMIN.

project.task.object.visiblity.with.security

To control the visibility of tasks in the board and list view of a project based on security permission settings.

Parameters:

The default value is set to false.

  • If set to True, the security permissions are enabled and the users cannot view the tasks in the projects if they do not have authorized permissions.
  • If set to false, the security permissions are disabled and the users can view the tasks in the project.

role.project.admin

Assign 'Project admin' privileges to Roles.

Parameters:

The default value is OE_ADMIN.

Click on the field to select a role you want to assign as Project Admin.

show.home.graphicalcount

To show/hide the display of graphical presentations such as Tables, Files, and Reports on the home page.

Parameters:

  • If set to true, the graphs will be displayed.
  • If set to false, the bar graph of objects will not be displayed.

oe.diagnostics.slowapi.performance.threshold

To set a threshold value to determine the maximum duration an API can take to load. The APIs exceeding the specified duration will be displayed in the API Performance tab.

Parameters:

Enter the value in the field provided (in milliseconds).

no.of.samplefiles.bucketanalysis

To set the maximum number of sample file name logs to be displayed in the folder during the Bucket Analysis process.

Parameters:

Enter the desired value in the field provided.

enable.folder.analysis

To enable or disable the Folder Analysis tab from the File Manager and the File Catalog pages.

Parameters:

  • If set to True, the Folder Analysis tab gets displayed in the File Manager and the File Catalog pages.
  • If set to False, the Folder Analysis tab will not be shown in the File Manager and the File Catalog pages.

globalsearch.fulltext.search

To search for full-text information with or without highlights based on the configured character size.

Parameters:

  • If set to True, the full-text search will be performed without highlights.
  • If set to False, the search will consider a character length of 50,000: it performs the search with highlights for the initial portion of the text and displays those results with highlights; the remaining portion of the text is displayed in the search results without highlights.

globalsearch.max.analyzed.offset

To configure the maximum limit for the preferred value for highlights. Adjusting this value allows users to control the extent of text that can be visually emphasized for easier identification or reference.

Parameters:

The default setting is 200,000 characters.

Enter the desired value in the field provided.


globalsearch.score.use.synonym

Configure the Synonym (Configure Search Keyword) Score in the Relevance score formula to determine the most relevant search results. The relevance score is calculated based on three components: the Elasticsearch score, the popularity score, and the synonym score (if configured).

Parameters:

  • If set to True, the search results calculation includes the Synonym score.
  • If set to False, the search results calculation excludes the Synonym score. Relevance score calculation depends solely on the Elasticsearch score and the settings configured for the Popularity score.

globalsearch.score.use.popularity

Configure the Popularity Score in the Relevance score formula to determine the most relevant search results. The relevance score is calculated based on three components: the Elasticsearch score, the popularity score, and the synonym score (if configured).

Parameters:

  • If set to True, the search results calculation includes the popularity score.
  • If set to False, the search results calculation excludes the popularity score. The relevance score calculation depends solely on the Elasticsearch score and the settings configured for synonym score.

Advanced Jobs

The latest release introduces a set of new advanced jobs that provide users with even greater control over the feature's behavior. The newly added advanced jobs are listed in the following Table.

Advanced Job

Description

Assign tag to table or column

This job assigns a table's tags to its columns, or a column's tags to its table, based on the value given in the attribute.

Attributes:

Enter '1' to assign table tags to columns or '2' to assign column tags to the table.

SSIS Load Folders Via Bridge

The purpose of this job is to establish a new SSIS connection and to load the projects and packages available in the SSIS connection to the OvalEdge application via Bridge.

Attributes:

SSIS Connection Name/ID

SSIS Folder Path of Bridge

License Type: “True” for auto-build lineage, “False” for the standard license

Bridge ID

MIGRATE_USER_AND_CONNECTOR_LICENSE_TO_61

This job updates the User License Type based on the current Permissions from previous versions to 6.1.

Attributes: No Attributes are required for this job to run.

Process relationname type 

The purpose of this advanced job is to enable users to explore relation types, such as synonyms, relates to, consists of, etc., between business terms and other data objects or terms. It offers valuable information, including the glossary ID, related object ID, relation type ID, and object type.

Attributes: Not required.

Download Metadata to server 

This advanced job facilitates downloading metadata for data objects, users and roles, or business glossary terms into a .csv file. The downloaded details will be saved to the specified path provided in the attributes.

Attributes:

  • Type: Enter the object type (e.g., Glossary)
  • Path: Specify the download path for the file.

Alert file check validation

This advanced job monitors the addition of files within a folder stored in the NFS connection, ensuring that files are regularly added to the designated folder. If no files are added within the specified time interval (for example, 24 hours), the job triggers an alert, which is sent to users via the system alerts inbox to notify them of the absence of file additions.

Attributes:

  • NFS Connection ID: Specify the ID of the NFS connection where the folder is located.
  • File Name with Extension: Provide the name of the file (including the extension) to be monitored.

Build lineage from python json config files

This job builds lineage from Python JSON configuration files.

Attributes:

  • Root file path: Specify the root folder path where the Python JSON configuration files are located.
  • Source ID connection: Enter the crawler connector ID for the data source involved in the lineage.
  • Destination ID connection: Enter the crawler connector ID for the data destination involved in the lineage.

DbtFileParseAndUpdateDescription

This job automates the extraction of relevant data from the DBT file and synchronizes it with the corresponding descriptions in OvalEdge.

Attributes:

  • Source ID connection: Enter the crawler connector ID for the data source involved in the lineage.
  • Destination ID connection: Enter the crawler connector ID for the data destination involved in the lineage.

LineageBuildingFromJSONFiles

This job is designed to build lineage by reading JSON files.

Attributes:

  • JSON File Path Folder: Specify the folder path where the JSON files containing lineage information are located. The job will scan and process the JSON files within this folder to build the lineage.

Extract Queries from the source to build lineage

This job is designed to extract unique queries from a data source in order to build lineage. It aims to remove duplicate queries from the query logs and focus on unique queries for lineage analysis.

Attributes:

  • Enter the connection ID of a connector.
  • Enter the file name, with path, that contains the queries.
  • Enter the column names that contain the query name and query content, separated by a comma.
  • Enter the server type to be used to parse the queries.
  • Enter the chunk size to process the queries in batches.

Download Metadata to Server

This advanced job assists users in downloading a "template with data" for the object types configured in the specified attribute. The job allows users to download data related to various object types such as roles, users, business glossary terms, data objects, etc., and save it as a .csv file.

Attributes:

  • Object Type: Users can enter the name of the object type they want to download the template for. They can specify multiple object types by separating them with commas. For example, if they want to download templates for roles and users, they can enter "roles,user" in this attribute.
  • File Download Path: Users can enter the path where they want to download the template. This specifies the location on their system where the .csv file will be saved.

Build Lineage from extracted Queries

This advanced job is designed to build lineage using the queries fetched from the previous advanced job, "Extract Queries from the source to build lineage".

Attributes:

  • Vertica connection info ID: Enter the ID or details of the Vertica connection.
  • Path generated from the advanced job "Extract Queries from the source to build lineage": Provide the path that was generated as output from that job. This path should point to the location of the query file generated by the extraction job.

Lineage for Metrics

The purpose of this job is to analyze the metrics used in a Qlik Sense report and establish their lineage. By examining the formulas associated with each metric, the job will trace the data sources, transformations, and calculations involved in the creation of those metrics.

Attributes:

  • Qlik Sense Connection Id: Specify the ID of the Qlik Sense connection for which the lineage needs to be built.

External Tickets Syncing 

The purpose of this advanced job is to synchronize the status of tickets in an external ticketing tool, such as Jira, ServiceNow, or DevOps. The job facilitates changing the status of the tickets to either "Approved" or "Rejected" based on their current status.

  • Unapproved tickets in the external ticketing tool will be changed to the "Rejected" status.
  • Approved tickets in the external ticketing tool will be moved to the "Approved" status.
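
Several of the jobs above ("Build lineage from python json config files", "LineageBuildingFromJSONFiles") consume JSON files that describe source-to-target mappings. OvalEdge's exact file schema is not specified in these notes, so the key names and structure below are a hypothetical illustration of how such a file could be parsed into lineage edges:

```python
import json

# Hypothetical lineage config: the keys "lineage", "source", and
# "destination" are illustrative assumptions, not OvalEdge's documented schema.
sample_config = """
{
  "lineage": [
    {"source": "staging.orders", "destination": "warehouse.fact_orders"},
    {"source": "staging.customers", "destination": "warehouse.dim_customer"}
  ]
}
"""

def parse_lineage(config_text: str) -> list[tuple[str, str]]:
    """Return (source, destination) edges parsed from a lineage config."""
    config = json.loads(config_text)
    return [(e["source"], e["destination"]) for e in config["lineage"]]

for src, dst in parse_lineage(sample_config):
    print(f"{src} -> {dst}")
```

In practice the job would read such files from the configured root folder path and resolve each source and destination against the crawler connector IDs supplied in the attributes.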

Improvements

Tags

  • In Tags | Child Tags | Previously, there was no 'X' symbol available to remove child tags. An 'X' symbol has now been introduced for child tags; clicking it triggers a warning popup asking you to confirm the removal of the tag. Choose 'Yes' to proceed with the removal or 'No' to cancel it.

Data Catalog

  • The Data Catalog list view has been upgraded to provide more options for interacting with data objects. 

    • Quick View
    • Copy to Clipboard
    • Open in New Window
  • If the user navigates to any data object and hovers over a term associated with it, the business description of the term will be displayed.
  • The Data Catalog now allows users to configure the visibility of two additional field columns in the tabular plug-in: 'Empty Count' and 'Zero Count.' These columns can be easily checked or unchecked to enable or disable in the tabular plug-in for Table Columns and File Columns.
  • Nine Dots Menu | Configure Search Keywords | In the previous version, only admin users had the privilege to add or remove keywords. With the latest update, business users can now add or delete keywords they have added themselves. Admin users retain the exclusive authority to modify or delete any keywords added by themselves or by business users.
  • Previously, when users selected data objects such as Databases, Tables, or Table Columns in bulk and clicked "Remove Terms" in the 9-dots menu, the Remove Term pop-up displayed all terms available in the application. Now, only the terms actually assigned to the selected objects are displayed in the Remove Term pop-up.
  • Tabular List View | A Cart icon is displayed in the Project or Access Cart; it is gray when no data objects are selected and highlighted in blue once data objects have been added to the access cart or default project.
  • Previously, Meta-Read users were unable to configure Search Keywords on data objects. Now, any user can configure Search Keywords on data objects.
  • The configured Code Custom Fields options are now displayed in a small pop-up window. This pop-up window also shows the values that have already been selected, making it easier for users to keep track of their selections.
  • In the Lineage | Tabular View | The Associated Objects column filter has been enhanced to support filtering of 'View Columns'.
  • In the Lineage | The Lineage screen now displays the full name of the source or destination objects in the tooltip, without any character limit. Previously, there was a limit of 30 characters, but now it has been updated to allow unlimited length for object names in the tooltip.
  • Tables
    • Previously, the Nine-dots menu did not display the "Update Governance Roles" option for a table data object. This option is now available within the Nine-dots menu for easy access and management of the governance roles for a table.
    • The sort icon previously displayed the list in descending order first, then ascending order. It now displays the list in ascending order first, then descending order.
  • Reports
    • Summary Page | Top Users | For Tableau reports, instead of providing an overall count for individual users, the top users are now categorized based on their device logins.
    • Breadcrumbs at the top now show the hierarchical navigation path of the Report and highlight the current Report object in blue, helping users understand where they are within the report structure.
  • Report Columns
    • A new section has been incorporated to display the “Last Meta Sync Date” to the users.
  • Codes
    • Previously, users could add both cataloged and uncataloged objects to any active DQR. The application now prevents users from adding uncataloged objects to an active DQR.

Business Glossary

  • Business users with Meta-Read permissions can now view only basic term details. The “View Details” button is disabled for users with Meta-Read permissions, preventing them from accessing more detailed information.
  • Suggest a Term | A “Detail Description” field has been added to the service request template for the selected Domain.
  • Term detailed page | The Nine Dots options now include a “Change Domain” option to change the domain of the term by selecting the desired domain from a list of available options.
  • The term detailed page includes an "Add Objects" button that enables the bulk addition of selected data objects.
  • When a new term is suggested using the “Suggest a New Term” option, a service request is raised. Once the request is approved by the final approver, the user can either publish the term or leave it in draft status.
  • In the service request template, the Domain description is now displayed in a tooltip when hovering over the domain name.

Data Stories

  • A report newly added using the report icon is now displayed as a hyperlink, so clicking the report name redirects the user to that specific report.

Dashboard

  • A new view, "Flow View," is available to visualize data lineage. This view displays the data flow as a flowchart between databases or data sources, providing an overview of the entire data flow, including dependencies and relationships. The Flow View also features a "Reset" button in the bottom right corner that reverts the graph to its initial representation, along with “+” and “-” icons for zooming in and out.
  • The Refresh button now displays a tooltip describing its functionality.

Projects

  • The 'Transition functionality' to configure different project statuses flow has been disabled on the Projects page. However, it is now possible to configure it from the backend code for clients who require this feature. Once configured from the backend, the Transitions toggle becomes enabled for each project. It is then up to the user's discretion to enable it for a specific project.
  • In the Projects | List View | Previously, selecting the Business Glossary tab and downloading it displayed term-related fields in the Excel file instead of project details. Now, the project-related column fields are also included when projects related to the Business Glossary are downloaded.

Service Desk

  • Previously, an object would not appear in the search results when searching Service Desk by its full name, despite being available in the application. The application now displays accurate search results.
  • After approval of a term creation request when a term is created in the application, the steward of the term will be notified regarding the same. If the request is raised by a team, then the whole team will be notified along with the steward.
  • The application will notify the users of any invalid characters added to the fields while raising a service request. This notification will be at the field level.
  • The SLA checkbox provides a way to set specific timeframes for the approval process. It requires approvers to respond to requests within a designated timeframe, preventing delays and reducing the risk of requests being overlooked or forgotten.
  • For example: if the SLA for a particular approval workflow is set at 24 hours, the approvers assigned to that workflow must approve or reject the request within that time frame. If they fail to do so, an advanced job in the background triggers a reminder notification to the approver every half an hour until approval is done.
  • The Approval Workflow can now be integrated with external ticketing systems such as Jira, ServiceNow, and Azure DevOps. To use this feature, users must select the external ticketing system in the 'Approver' field from the drop-down menu while setting up the approval workflow, and also provide the “Approved Status” or “Rejected Status” while integrating the service request with the external ticketing system. The service request is approved or rejected when its status matches the Approved/Rejected status.
  • Service Requests can now be grouped based on connection type and approvers in the workflow, thanks to the group feature. Additionally, both system and custom templates now support this functionality.
  • Previously, only users, teams, and roles from the organization were able to approve service requests. However, with the latest update to the Approval Workflow, external tools can now be added as approvers for service requests. Users can select the external tool from a dropdown list of options that are configured to the service request template. If multiple levels of approvers are configured, external tool approvers can be set for different levels.
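
The SLA example above (a 24-hour window, then a reminder every half hour until the request is handled) reduces to simple scheduling arithmetic. The function below is an illustrative sketch of that background-job logic, not OvalEdge's implementation; the name and signature are assumptions:

```python
from datetime import datetime, timedelta

def reminders_due(raised_at: datetime,
                  now: datetime,
                  sla: timedelta = timedelta(hours=24),
                  interval: timedelta = timedelta(minutes=30)) -> int:
    """How many reminder notifications should have fired by `now`
    for a still-unapproved request (sketch of the background job)."""
    deadline = raised_at + sla
    if now <= deadline:
        return 0  # still within the SLA: no reminders yet
    overdue = now - deadline
    return int(overdue / interval)  # one reminder per half hour past the SLA

raised = datetime(2023, 3, 21, 14, 0)
print(reminders_due(raised, raised + timedelta(hours=23)))  # within SLA
print(reminders_due(raised, raised + timedelta(hours=26)))  # 2h past deadline
```

A request raised at 2:00 pm on 3/21 breaches its SLA at 2:00 pm the next day; two hours later, four half-hourly reminders would have fired.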

Governance Catalog

  • Certification Policy
    • Previously, the "Type" column in the certification options displayed all options, including Violation, Caution, Inactive, and None, instead of just the "Certify" option. With the new update, only "Certify" is displayed in the "Type" column.
    • Additionally, when executing a Policy, the job logs did not display complete information for the trigger object. The job logs now display the trigger object's complete information.
  • Governed Data Query
    • Previously, users were able to view masked table column data in the GDQ results. This has been updated to prevent it.
  • Users List
    • A new column “Status” has been added to the tabular list view to display whether the user is Active or Inactive.
  • Data Quality
    • In Data Quality | Add Rules now includes new fields such as Function, Dimension, Associated Objects, Steward, Tags, Scheduling, Execution Instructions, Violation Message, Corrective Action, Alert on Failure, and Service Request on Failure. New additions to this module include:
    • Data Quality Objects suggest automated Rules based on OvalEdge data objects. Users can enable, execute, and view reports on these rules, as well as schedule them to run at a specified time. They can also set up alerts and create service requests on failure.
    • The Control Center displays a list of all data quality rules that failed after execution. It provides additional information such as the Connection name, Schema name, and Table name associated with each DQR, along with other self-explanatory columns.
    • The Data Anomalies page lists all data objects where unusual deviations in metadata are detected, such as an increase or decrease in the count of tables or rows. Users can set a threshold percentage for this anomaly, and if the value exceeds the threshold, the corresponding data objects and their details will be displayed on this page.
  • Data Quality Rules 
    • Selecting any rule navigates to its Summary page, where a new "Control Center" toggle button allows the failed results of the DQR rule to be included in the Control Center.
    • While establishing a DQR users can add the desired data objects to the rule by clicking on the newly added list view.
    • To edit the name of an existing Data Quality Rule (DQR), a pop-up was previously displayed. With the new update, users can edit the name inline by simply clicking on the edit icon.
    • When a user enables "Caution Downstream" in a Data Quality Rule (DQR), they will receive an alert if any of the cataloged objects fail during the execution of the DQR. The user will see a caution message across all downstream objects.
  • Data Quality Objects
    • If the rule type is automated, multiple rules associated with the selected data object can be executed as a single job.
    • The data quality objects tab displays the automated rules relevant to the selected object and its type. With this enhancement, users have the ability to modify, activate, or deactivate rules. Additionally, all active rules can be executed simultaneously, and the reports generated from these executions can be viewed in the Data Quality Reports. The activation or deactivation of rules can also be executed through the Nine Dots menu.
    • Previously, users could deactivate a rule while it was being executed; now, a rule cannot be deactivated while its execution is in progress.
  • Control Center
    • The Remediation Assignee column is now available with an edit icon which allows users to change the Remediation Assignee for the associated Data Quality Rule.
    • Users can now navigate to the Data object summary page by clicking on the Data Object name on the Control Center page.
    • Now users can change the current Status of the Data Quality Rule using the edit icon.
    • When the user clicks on the icon below the "Remediation Plan" column, a pop-up displaying the "Remediation SQL" information will appear.
  • Data Anomalies
    • The user can update the status of an anomaly by clicking on the edit icon that appears when hovering over the status. The user can also view the status change history within the same popup window while making updates.
    • If the "Run Anomaly Detection" checkbox is selected for any specific data object, then the anomaly detection process will run as a separate job once the profiling is finished. The user can then access all logs related to the anomaly detection process for that specific schema.
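
The anomaly-detection behavior described above compares a metadata count (for example, a table's row count) against its previous value and flags the object when the change exceeds the user-set threshold percentage. The function below is an illustrative sketch of that check; the name and signature are assumptions, not OvalEdge's implementation:

```python
def is_anomalous(previous_count: int, current_count: int,
                 threshold_pct: float) -> bool:
    """Flag a data object when its row/table count changes by more than
    `threshold_pct` percent relative to the previous profiling run."""
    if previous_count == 0:
        return current_count != 0  # any change from zero counts as a deviation
    change_pct = abs(current_count - previous_count) / previous_count * 100
    return change_pct > threshold_pct

# A table that grew from 1,000 to 1,300 rows breaches a 20% threshold.
print(is_anomalous(1000, 1300, threshold_pct=20))  # True
print(is_anomalous(1000, 1100, threshold_pct=20))  # False
```

Objects that breach the threshold would then surface on the Data Anomalies page along with their details.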

My Resources

  • My Profile
    • Profile Picture now supports avatar images, allowing users to personalize their profiles with a custom image.
  • Inbox
    • A new tab "Project Alerts" has been added to easily view all the notifications related to projects and stay informed about any updates or changes.
  • My Permissions
    • A download icon is enabled in all the tabs to allow users to download the details.
    • Table Column, File Column, and Report Column tabs are now available for each governance role. It allows users to view the objects for which they are assigned either as a steward/owner/custodian or other governance roles.
  • My Watchlist
    • Reports tab | Metadata Changes option | Previously, users were not notified when Reports changed from “Active” to “Inactive” status or vice versa; users are now notified of these changes.
    • In Reports, the Significant Data Changes option has been removed.

File Manager

  • The “Upload File” option in the Nine Dots menu has been renamed to “Upload File/Folder”.

Query Sheet

  • While in the Auto SQL tab, the user is presented with the option to drag the bar towards the right to view additional information.
  • Previously, in column references, all previous query history was displayed instead of only the saved queries. This has been improved so that only saved queries are shown in column references.

Jobs

  • When downloading job logs, a spinner icon will appear to indicate that the system is in the process of downloading the logs.
  • The application underwent remodeling to accurately display user status in the job status. Previously, the job status did not provide an accurate description of whether a user was deleted or deactivated when either of these actions was performed in the "Users & Roles" section of the "Administration" tab. However, the application now displays an accurate description of the user status.
  • The “Job Step Name” column field in the tabular list view now displays the Connection Name when the connector is deleted from the application. Previously, after deleting a schema or connection, the Job Step Name column would only show the Connection Id, which could lead to confusion while working with multiple connections.
  • The application presents information about active user sessions and running jobs with a 10-minute refresh interval. This feature will assist the user in monitoring usage and identifying performance-related issues.
  • Whenever an AI recommendation is executed, the log displaying that specific job now displays the corresponding term ID as well.
  • Previously, the logs of jobs that were ongoing, completed, or started within the specified Start time and End time were not displayed. They are now included.
    Example: If the job start time is “03-21-2023 2:00 pm” and the end time is “03-21-2023 3:00 pm”, jobs that were ongoing, completed, or started between 2:00 pm and 3:00 pm will also be displayed.
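
The example above is a standard interval-overlap check: a job is shown when its own run interval intersects the filter window. A minimal sketch (the function name is an assumption; ongoing jobs are modeled with `job_end=None`):

```python
from datetime import datetime
from typing import Optional

def job_in_window(job_start: datetime, job_end: Optional[datetime],
                  win_start: datetime, win_end: datetime) -> bool:
    """A job is displayed when its run overlaps the filter window.
    Ongoing jobs (job_end is None) are treated as still running."""
    effective_end = job_end or win_end  # an ongoing job extends through the window
    return job_start <= win_end and effective_end >= win_start

win_s = datetime(2023, 3, 21, 14, 0)   # 2:00 pm
win_e = datetime(2023, 3, 21, 15, 0)   # 3:00 pm

# Started before the window, completed inside it -> displayed.
print(job_in_window(datetime(2023, 3, 21, 13, 30),
                    datetime(2023, 3, 21, 14, 15), win_s, win_e))  # True
# Ongoing job that started inside the window -> displayed.
print(job_in_window(datetime(2023, 3, 21, 14, 45), None, win_s, win_e))  # True
```

This is why jobs that merely overlap the window, rather than starting and ending inside it, now appear in the results.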

Advanced Tools

  • Impact Analysis
    • Impacted Objects | Previously, users were unable to download the impacted objects for a single Source ID using the nine dots option. Users can now download the impacted objects for a single Source ID.
    • The issue where users were unable to add all or a few columns as source objects has been addressed.
  • Build Auto Lineage
    • After using “Correct Query” and clicking the back button, users were not redirected to the same object in the tabular list view and were instead directed to a different page. The application now works as intended.
  • Compare Schema
    • Previously, users were unable to add the selected objects to Impact Analysis after executing the schema comparison. Users can now add the objects to Impact Analysis.
    • When comparing information of a specific schema, the application now displays the count of the tables added, deleted, or modified respectively to that schema.
    • The status of the renamed table was displayed as deleted in the “Action” column. Now the status displays accurate details about the action.
  • Deep Analysis Tool
    • The Advanced Tools module is now implemented with the 'Deep Analysis Tool' to display the transactional data changes in the application UI. To view this tool in UI, users must configure the key value of “enable.deepanalysis” as true in Administration | Configuration | Others.
    • Now users can filter the result in the remark column along with the schema name. The sort option is also available to sort the updated date, table name, and column name. Users can also download the deep analysis result using the download icon available at the bottom of the page.
    • After executing a deep analysis, the remark column in the table change summary will now display the count of the columns that underwent transactional data changes along with the remark.
    • For the Business Glossary template, the Excel sheet tab names have been renamed from ‘Term Relationships’ to ‘Related Objects’ and ‘Term Objects’ to ‘Associated Data’ to match the application’s UI terminology.
  • Load Metadata from Files
    • Previously, after adding a tag to a report/report column using the template with data, removing the same tag via the template still left the tag name displayed in the UI. The tag name is now removed from the UI correctly.
    • When attempting to update technical descriptions of table columns via load metadata from files, users experienced issues with the information not being reflected in Data Catalog. However, the application has been modified to ensure that the technical descriptions are accurately displayed in Data Catalog.
    • Previously, when downloading a Template or Template with Data for the Table Columns, there was a mandatory field called "Column Type." However, as an improvement, this field has now been made optional.
    • When any Template with Data for any data object is downloaded, a notification email will be sent to the respective admin user notifying about the job status and template details.
    • While uploading files via Upload File or Folder and Load Metadata from Files, if a file contains any malicious content, those details will be displayed in the job logs.

Administration

  • Connectors
    • Database Type field has been newly added with Regular and Unity Catalog options - Regular option fetches all the schemas from the remote source. On the other hand, the Unity Catalog option allows users to access tables from nested schemas in the remote source.
    • Previously, the application would encounter an error while performing a crawl/profile through Impala Kerberos Connector and Hive Kerberos connector. However, modifications have been made to the application to eliminate the error and restore its functionality as intended.
    • A new check box for "Technical Description" has been added to the crawler page.
    • Users have the capability to establish lineage for data sourced from a JSON file.
    • The Crawl option has been renamed to “Action” in the "Select Important for Crawling and Profiling" section.
    • Previously, when the user performed profiling using the Hive Kerberos Connector, the job was not being executed and the job logs were displaying an incorrect table alias. However, improvements have been made to the system and the job logs are now displaying the correct table details.
    • Previously, users were unable to validate the connection using OKTA. A client credentials flow has now been added to validate the Power BI connection.
    • The Manage Connection form has been enhanced with a dropdown parameter called "Alteryx Files Type"; selecting the "OneDrive" value enables uploads from OneDrive.
    • In the "Manage Connector" pop-up for the Salesforce Connector, the "API Version" dropdown has been updated to include the latest versions of the Salesforce API.
    • A tabular plug-in displays columns for various connector types, including Base Connector, Auto Lineage Connector, Data Quality Connector, Data Access Connector, Integration Admin, and Permission.
    • Previously, the system was unable to parse a few views, resulting in lineage-building failures. This has been resolved.
    • In the Manage Connection window, the SQL Server Windows Authentication and Informatica connectors now include an OvalEdge Environment dropdown menu, used to select the operating system on which the OvalEdge application is installed, such as Windows, Linux, or Unix.
    • WebFocus Connector now supports Delta Crawl for fetching only newly added and updated data objects from the remote system, reducing the amount of data transferred and improving performance.
    • For the Qlik Sense connector the Alias Host Name is newly added to the managed connection pop-up window to enter the alias hostname for establishing a connection with Qlik Sense.
    • The Connectors module now includes a “Connector Health” column that allows users to easily identify and resolve any validation issues. The column displays a round icon in either red or green, indicating whether the connection has been successfully validated at the scheduled time or not. Users can click on these icons to view a pop-up window that displays complete information about the executed job, including the time and details, as well as the mode (Manual or Auto).
    • The latest update to the Salesforce connector includes two important features. First, JWT authentication is enabled, allowing users to securely authenticate with Salesforce using a JSON Web Token. Second, file path validation ensures that files uploaded to Salesforce are validated for security and compliance purposes.
    • Earlier, the application was unable to retrieve descriptions for Functions, Procedures, and Views during crawling. However, the functionality has been enhanced and the descriptions are now being fetched accurately.
    • An implementation has been made to integrate AWS Secret Manager connector for Power BI and IICS (Informatica Intelligent Cloud Services). This integration allows users to seamlessly connect and access AWS Secret Manager from both Power BI and IICS, providing enhanced security and centralized management of secrets and credentials. 
    • A request to synchronize Databricks certifications for tables and import those certifications into OvalEdge has been successfully implemented.
    • To support the Azure Key Vault, additional connection attributes were required. We have made the necessary improvements to accommodate this requirement.
    • For the Athena Connector, OvalEdge previously did not support executing queries on Athena. Users can now execute queries on Athena; note that update queries are still not supported in this implementation.
    • HashiCorp Connector
      HashiCorp Vault is implemented to read Data source (Such as Oracle, and Snowflake) Connection passwords. The Key-value secret credentials generated in the HashiCorp instance are used to access the data source. When establishing a connection with a data source, the OvalEdge application makes a call to the HashiCorp in real time to read the secret credentials.
      • Create Key-Value secret credentials with data source details in the HashiCorp Vault. Then, in the OvalEdge application, connect to the HashiCorp Connector and configure the secret details generated in the HashiCorp Vault.
      • In the Administration Connector, HashiCorp is integrated into the OvalEdge Manage Connector form, using the HashiCorp Base URL and Token generated from the HashiCorp website.
      • Vault Base URL*: The server name/URL used to connect to the HashiCorp connector.
      • Vault Token*: The vault token generated in the HashiCorp instance.
    • Success Factor Connector
      SuccessFactors is a cloud-based human capital management (HCM) system provided by SAP, and it offers APIs based on the OData EXT API for integration and data retrieval purposes.
      • To crawl and profile data from SAP SuccessFactors, the initial step is to establish a connection with SAP SuccessFactors using the OData EXT API authentication mechanism, which requires a Username, Password, and API Endpoint URL.
  • Users & Roles
    • While creating a new user now the default avatar will be displayed based on the first letter of the first name.
    • Users & Roles Management | The column "Privilege" has been changed to "Connector Account Privileges".
    • Users & Roles Management | Previously, when crawling, a new OvalEdge user was created for every remote user encountered. Now, a remote user is only mapped to an existing OvalEdge user when the remote username or email matches; no new OvalEdge users are created for unmatched remote users.
    • The user name is now displayed in ‘First Name, Last Name’ format. This naming convention is also applied to the other Governance roles throughout the application.
    • In Users & Roles | Connector Roles | The column header has been updated from "Privileges" to "Account Privileges."
    • An enhancement has been implemented in OvalEdge, introducing a new role called "User & Role Admin." This role is specifically designed to manage and control users and roles within the OvalEdge application, similar to the existing Tag Admin role. By assigning the User & Role Admin role, the user will have privileges to access and manage the Users & Roles tab of OvalEdge, providing enhanced control and administration capabilities.
    • In the Roles tab | Users now have the ability to delete multiple roles at once with a single click. Checkboxes have been enabled next to each role, allowing users to select multiple roles for deletion.
  • Security
    • Admin users can now use the Administration > Security > Application module to give access to Business Viewers or Roles providing greater flexibility and control in managing user permissions. Business Viewers logging into the application will now have access to the following modules: Home, Tags, Data Catalog, Business Glossary, Data Stories, Dashboards, Service Desk, and a few sub-modules in the My Profile module. 
    • When users edited User/Role permissions on data objects, they were not redirected to the edited role item page. Instead, they were navigated to a different page. However, the navigation has been improved and users are now directed to the correct page after making updates to role permissions on data objects.
    • Users can be granted privileges to create a new domain specific to their roles.
    • The “Add User Access” option is now available in the apply security pop-up window for Folders, Report Groups, Reports, Domain, and Story Zone.
    • A search option has been added to the governance roles columns in all the tabs.
    • In the Permissions pop-up, the Role Name column label has been changed from “Roles” to “Users Roles”.
    • Default roles applied to data objects can grant access to all data objects based on privileges. Thus, Default Roles will not be applicable to the Applications module.
    • Approval Workflow | The “Approved By” label is renamed to “First Approved By”.
    • In Security, the Connector Creator role now has the ability to view all connection names on the Connector page. Also, the Domain Creator role now has the ability to view all domain names on the Security > Domain page.
  • Service Desk Templates
    • The Service Desk Template now lets users integrate one service request with multiple ticketing systems; for example, a table access request can be integrated with Jira, ServiceNow, and Azure DevOps.
    • An additional column is provided for the users to add any comments while approving or rejecting a service request.
    • In the "Mapper" section, if a mapper (such as ServiceNow) is configured, the clickable options for other mappers (such as Jira) are now disabled.
    • OvalEdge will provide hourly notifications to users who have not defined the Service Level Agreement (SLA) on a Template.
    • While editing a Service Desk Template with custom fields selected, Field Validation displayed the message 'Min-Max number of JSON API Required', even though validation worked correctly when configuring number custom fields with a Min-Max Number Range. The application has since been updated to handle Field Validation for custom fields properly, and users no longer encounter this message.
    • The Service Desk Template now seamlessly integrates with Azure DevOps, allowing new tickets to be created with ease. When a ticket is generated in the Azure DevOps service desk using this template, the standard message is enhanced to display additional information.
    • The updated message now states, “This ticket has been raised in OvalEdge and has been posted for fulfillment. Please feel free to contact your designated OvalEdge Platform SPOC (Single Point of Contact) for any inquiries."
    • The user will be given the option to customize the description of any service ticket they raise according to their needs.
    • A new "Additional Information" field in the template lets users store stage-related data in JSON format. This includes icons, names, and descriptions for customized and structured storage. Users can configure field settings like visibility, editability, and mandatory status for each stage of creation, fulfillment, or approval. Visibility determines when the field is displayed, editability controls whether users can edit, and mandatory status decides whether the field is essential for raising a service request.
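As an illustration, the stage-related JSON stored in this field might look like the following. The key names here are hypothetical, not OvalEdge's exact schema:

```python
import json

# Hypothetical field definition; OvalEdge's exact JSON schema may differ.
additional_info = {
    "fields": [
        {
            "name": "cost_center",
            "icon": "finance",
            "description": "Cost center to bill the request against",
            # Per-stage behavior: visibility, editability, mandatory status.
            "stages": {
                "creation":    {"visible": True, "editable": True,  "mandatory": True},
                "approval":    {"visible": True, "editable": False, "mandatory": False},
                "fulfillment": {"visible": True, "editable": False, "mandatory": False},
            },
        }
    ]
}

# Serialize for storage in the "Additional Information" field.
payload = json.dumps(additional_info)
```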
    • The new enhancement allows users to integrate a service request template with multiple external systems, such as Jira, ServiceNow, and Azure DevOps. This provides flexibility for users to switch between different external tools and work within their preferred system. Configuring the integration options for each external system is simple and intuitive. The integration status can be set to "Active" or "Inactive", and users can easily activate, deactivate, or delete an integration as required.
    • When editing a Service Desk Template, the "Move to Draft" and "Delete Template" options are grayed out and disabled for system templates, since these templates are permanent and cannot be edited.
    • The Service Desk Templates tabular list view previously had a search icon for the columns "Created By" and "Updated By". This has been replaced with a filter icon for improved functionality and user experience.
    • Initially, the service desk template had the default priority for a ticket set as Medium. However, we have made improvements to allow you to modify the priority status according to your requirements. You now have the flexibility to choose from a range of priority options, including Highest, High, Medium, Low, and Lowest, enabling you to accurately reflect the urgency and importance of each service request.
  • Advanced Jobs
    • The existing advanced job did not support the bridge component. A new advanced job, ”LoadSSISFoldersWithBridge”, has been added to support the bridge component.
    • Added a new advanced job, Report Download Job, to download reports with report names, business descriptions, and labels.
    • An enhancement has been implemented to replace the character '/' with '_' in table names when building lineage between the SAP HANA and Redshift connectors for tables that have the same name.
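The substitution itself is a simple character replacement applied before names are matched across the two systems; a sketch of the idea (the SAP-style namespaced table name below is illustrative):

```python
def normalize_table_name(name: str) -> str:
    """Replace '/' with '_' so slash-bearing table names (common in SAP
    namespaced objects) can be matched against their Redshift counterparts
    when lineage is built by name."""
    return name.replace("/", "_")

# Illustrative SAP BW-style namespaced table name.
normalized = normalize_table_name("/BIC/AZSALES00")
```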
  • Custom Fields
    • After adding code custom fields and saving it, the checkboxes for Multi-select, Editable, and Viewable are now enabled.
  • Configuration
    • A new tab has been added to the Notifications section that lets admin users customize the notification messages sent out to users. The body of a message can be modified using the variables provided in the right panel; these variables correspond to particular events or features, and administrators can drag and drop them into the message body. Notifications can be customized for three different mediums: Inbox, Email, and Slack, with each medium having its own tab in the Notification Templatization section. Users can access it by hovering over a function and using the edit icon.
    • The user can set the threshold percentage for detecting anomalies based on the percentage of change from the last recorded value.
    • Previously, for a few configurations, the parameters drop-down displayed all the available roles; now it displays only the user roles associated with Author and Analytical Licenses. This change restricts access to certain functionalities to authorized roles only.
    • A History icon is now enabled next to every configuration, allowing users to view the history of changes made to a particular configuration, including when each version was created and who made the changes. This feature is particularly useful when multiple users are making changes to the same configuration or when changes must be tracked for auditing purposes.
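The anomaly threshold described above reduces to a percentage-change comparison against the last recorded value; a minimal sketch of that check (the exact comparison OvalEdge applies may differ):

```python
def is_anomaly(last_value: float, new_value: float, threshold_pct: float) -> bool:
    """Flag a metric as anomalous when it deviates from the last recorded
    value by more than the configured threshold percentage."""
    if last_value == 0:
        # No usable baseline: treat any nonzero change as anomalous.
        return new_value != 0
    change_pct = abs(new_value - last_value) / abs(last_value) * 100
    return change_pct > threshold_pct
```

For example, with a 20% threshold, a row count moving from 1000 to 1300 (a 30% change) is flagged, while a move to 1100 (10%) is not.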
  • Miscellaneous
    • OvalEdge will now notify the users when the SSL certificate is approaching its expiration date.
    • In the OvalEdge Application | By hovering over any data object users can find additional functionalities such as Copy to clipboard, Quickview, and Open in a new tab.
    • The Data Quality Objects, Projects, and Impact Analysis modules have been improved with an “Add Objects” option that helps users select multiple objects (maximum 20) at a time. Once users click the “Add Objects” button, the 'Add Object' pop-up window displays the list of all available objects, where users can click the copy icon to select objects.
    • OvalEdge now includes the “Base URL” of the server in the emails it sends to users, which identifies the server from which the email originated.
      Example: If a user receives an email from the server amazonses.com, this information will be displayed in the email along with the OvalEdge email address.
    • Across the application when a search filter is used, the tooltip now displays selected options separated by commas for users to see which options are currently selected.
    • In OvalEdge Application | If a user is moved from one group to another (for instance, from Author to Business) in OKTA Authentication, the same information is dynamically updated without requiring the user to log into OvalEdge.
    • The interactive/clickable options are now highlighted in blue color when users hover over them across the application.
    • The username will now display the first name and last name separated by a comma instead of the email id in the “Created By” column across the application.
    • When the login page was left idle for 10 minutes and a user attempted to log in with valid credentials, the application previously generated an error message indicating "Invalid Credentials." The application has been modified to display the message "Session Expired" instead. As a result, users can refresh the page and successfully log in without encountering any obstacles.
    • If a user is logged into both the application and the Chrome Extension simultaneously, logging out of the application will automatically log them out of the Chrome Extension.

Bug Fixes

Release 6.1 has resolved all of the listed bugs, resulting in the OvalEdge Application now functioning as intended.

General

  • While performing basic operations such as search and filter on the table column, the application experienced a 500 error and was running at a slow pace. This issue is fixed and the application is performing as intended.
  • The user was not able to find the SSIS package name consisting of spaces and special characters, in the Global Search, Build Lineage page, and Catalog Search. Now, this issue is resolved and the SSIS package name is displayed.

Home

  • Catalog Highlights section | Previously, there were issues with the Top/Recent toggle button in the Catalog Highlights section: the top results associated with different data objects such as Tables, Reports, Files, and Codes were not displayed properly, and when the user toggled between Top and Recent, the data in the section was not refreshed. These issues have now been resolved, and results for the data objects are displayed according to the user's choice of Top or Recent.
  • If a project with applied advanced search filters was added to bookmarks, the bookmarked page did not load with the same filters selected. This issue has been resolved.
  • In Global Search | The user encountered an error when trying to navigate from the Table Column tab. This error was caused by a Liquibase Time Script run failure and has been resolved.
  • Nine Dots Menu | When a user removed a term using the nine dots menu, the application did not return a confirmation message. This issue has been resolved, and the application now returns a message confirming the removal of the term.
  • In the “Add Term” pop-up window, the terms that were selected and added from the left existing term options drop-down were not displayed in the right panel. Now, the issue is resolved and the added terms are displayed in the right panel.
  • Nine Dots Menu | Add to Impact Analysis now displays pagination numbers displaying the current page and the total number of pages available.
  • In Global Search | When the user searches for data using the global search, the application is returning irrelevant information. This issue is fixed.

Tags

  • There was an issue where the data objects associated with the current tag were not shown in the Association's sections. The issue is now resolved, and the data objects associated with the Tag are displayed properly.
  • When governance roles were edited in the DAG tag tab, the updated Governance Roles did not appear in objects assigned to the DAG tag. This issue is fixed, and now if the user changes the Governance role, objects assigned to the DAG tag are updated in the Data Catalog.
  • When a tag was deleted, it was audited in the Audit Trails, which made it impossible to create a new tag with the same name through Load Metadata from Files; the error 'Tag with this name already exists in the audit trails' was shown. This issue has been resolved.
  • The Tag Summary page displays the number of Tags associated with different types of objects, such as Databases, Tables, and Reports. If there are no Tags associated with an object type, it displays a count of 0.
    Example: If a table has five associated tags, the Table tab count will be five. Previously, the Tag Summary page displayed the Database tab by default even if it had no associated Tags. The default tab has been changed to prioritize object types with an association count greater than 0.

Data Catalog

  • If the user navigates to any data object and hovers over the term associated with it, the business description will be displayed.
  • Despite the job logs indicating a successful update, modifying and uploading a "Schema Template" in the Data Catalog failed to upload the data to the application. This issue is now fixed.
  • Previously, the business user who had Meta read permission was unable to view preview results following profiling. Despite having permission, the application displayed “0” instead of any data. Fortunately, this issue has since been resolved.
  • Previously, when a user clicked on the expanded menu, it would automatically navigate to the Database Tab without waiting for the user to make a selection. However, now it correctly navigates based on the user's selection in the expanded menu. If the user selects the Table data object then it navigates to the Table tab instead of the Database Tab.
  • The data was not getting updated when the user attempted to upload metadata using the Table Template. This issue is fixed.
  • The Code Custom Field search filter wasn't operational when code custom fields were populated to the tabular list view using Configure view. The issue has been resolved and is now functioning correctly.
  • In Data Catalog | After crawling a Power BI report, the report visualization was not displayed on the Data Catalog Report. This is now fixed, and the visualization of Power BI reports is displayed.
  • Databases
    • When the user clicks on the sort icon to sort the Schema column, the sort icon does not work properly. This issue is now resolved.
  • Tables
    • In the Columns tab for data crawled with the MySQL connector, the "Key" column was not indicated as the "Primary Key". This issue has been fixed.
    • Duplicate data is displayed when crawling using the Salesforce Connector and this issue is now fixed.
    • In spite of the user having Meta Write Data Read (MWDR) permissions, the Download option of the Nine Dots was disabled, preventing the user from downloading the data. However, this issue has now been resolved, and users with MWDR permissions can download the data object data using the Nine Dots option.
    • Users with only Data Read permission were able to download the data. This has been fixed to disable the Download Data option in the Nine Dots menu for users with Data Read permission.
  • Table Columns
    • The sorting option was not working on the “Status” and “Created On” columns. This issue is now resolved, and the sorting options are working as expected.
    • The tag that is assigned to a specific table column was not automatically being assigned to the corresponding table. However, this issue has been resolved and the tag is now assigned to the corresponding table as well.
  • Reports
    • When the user crawls the reports using the Looker connector, the application displays duplicate columns. This issue is fixed and no duplicate columns are being displayed.
    • Previously, the filter conditions to sort the Path column in the Data Catalog and Security were missing. These filters have now been added as text values such as "Starts With" and "Contains". The update has resolved the sorting issue, and accurate sorting results are now displayed.
    • When the user clicked on the highlighted text in the graph on the Summary page of any report, an error occurred, and the report was not properly displayed. The issue was caused by an incorrect request parameter. However, this issue has been resolved, and now the user is able to visualize the report.
    • Users were facing an issue visualizing reports once the lineage was built for Qlik Sense. This issue is now fixed.
    • After changing the column status from active to inactive, users were allowed to select the visible type, instead of displaying the visible type as 'invisible'. This issue has been resolved and the application is working as intended.
    • The user was unable to view pages in the Data Catalog after crawling Power BI reports, and was also unable to build downstream lineage for Power BI reports. This issue is now resolved, and the user is able to view pages after crawling has been performed.
    • When a Qlik Sense connector was crawled, the Report Types column did not display the filter option for types other than "NON-Tagged", "Dashboard", and "QVDBuilders". Additionally, on the Build Auto Lineage page, the Object Type column did not accurately display the expected object types, such as "NON-Tagged" or "Dashboard". Both issues regarding the Qlik Sense connector have been resolved, and the application is working as expected.
  • Report Columns
    • The user was unable to view report columns in the Data Catalog after crawling the PowerBI reports as a specific file named “PBIX” was not available. This issue is now fixed.
    • In the association tab, the user was unable to search table names to associate them with the code from SQL Server table objects. This is fixed and displays the table names accurately.
    • When the user added a term to an object using “Copy Title to the Catalog” and then removed that term, the term was removed from the object but its title was still displayed. This issue is now resolved; when the term is removed, its title is removed as well.

Business Glossary

  • When objects are added as related objects for a term, the full path should be displayed as Table > Database_Name.Schema_Name.Table_Name. This issue has been resolved, and the path is now displayed in the format mentioned above.
  • In the Data Objects tabs, the recommended data objects from AI recommendations were not appearing in the Associated Objects list even though they were accepted. This issue is now resolved.
  • Recommendation was working only for the admin user and not for a team. Now, the recommendation works for team users.
  • Related Objects | When adding related data objects, the Save button was not enabled but the data objects were still added without it. This issue has now been resolved and the data objects are added as related objects only when the “Save button” is clicked.
  • When a user clicked on a term and added associated data for tables and columns, the count was not visible on the Summary page. This issue is resolved, and the count is now displayed on the Summary page.
  • For Tables and Columns, the correct count of data objects was not displayed, showing zero even when multiple data objects were associated. This issue is now fixed, and the right count of data objects is displayed.
  • When a user created a custom view and changed the order of the fields displayed in the default view (for example, moving the Custodian field to the start and Domain to the end), the field order was reversed from what the user specified. Additionally, if the user attempted to reorder a field (such as moving Status to the top) and then saved the view, the fields were saved in the wrong order. However, these issues have now been resolved, and the "Configure View" feature is functioning as intended.
  • After downloading the Term Details in an Excel sheet, the “Show Classification” column displayed “No” despite enabling “Show classification at dictionary” under Manage Data Associations. Now it is fixed to display “Yes” when the “Show classification at dictionary” option is enabled.
  • When a user adds related tables for a term, the information about those related tables do not appear in the Data Catalog > Table > References tab. However, this issue is resolved and now the related tables added for a term are displayed in the Table References tab.

Data Stories

  • The application would freeze when formatting a table, requiring the user to perform a hard refresh to get the application running again. This issue has been resolved and the user no longer needs to reload the application.

Dashboard

  • The Data Quality Score Report was not displayed due to an internal server error. The error has been resolved, and the user is able to view the Score Report.

Service Desk

  • When creating a term creation request for a term name that already existed under the same domain, the application did not show any error message. It is now fixed to display the error message.
  • For an API Access Request using a custom template, while adding a team to the approval workflow, users removed from the team were still displayed. Also, after the request was raised, duplicate users were displayed under the approval workflow in the service request summary. Both issues have now been resolved.

Governance Catalog

  • Data Classification
    The page did not refresh automatically when a tag was added to the domain from the nine dots menu. When a manual refresh was performed, the newly added tag was displayed, but the respective domain and its details were not. This issue has been resolved, and the respective domain details are now displayed.

Projects

  • After a user selected a domain and added it to the relevant category, the category field was not reset when the user tried to select another domain from the dropdown. This issue is now resolved, and the category field is reset.
  • Previously, when users with the OE_Public role attempted to delete a project, an error would occur. Similarly, when users tried to delete it from Projects | Board View | Nine dots, a success message stating that the selected project was deleted would be displayed. However, despite the message, the deleted project would still appear on the Projects page. This problem has now been resolved and deleted projects no longer appear on the Projects page.

File Manager

  • For NFS Connector, when a user uploaded a folder with files, only the folder was displayed in the File Manager, but the files within that folder were not visible. However, this issue has now been resolved, and both the folders and files are displayed in the File Manager.

Jobs

  • The appropriate job log was not displayed while creating Business Glossary Terms. This is fixed, and accurate job details are now displayed.
  • The Job status was not changing from the “INIT” state after crawling a database from Administrator | Connectors. The status of the job is now displayed accurately.
    • A similar issue, where the job logs did not display the right values while crawling a Power BI report, has also been addressed and fixed.

Advanced Tools

  • Impact Analysis
    • The schema name for the impacted objects was not displayed in the downloaded impact analysis results. For better understanding, the schema associated with the impacted objects is now displayed.
  • Lineage Maintenance
    • When a user clicks the Add button, the “Add / Edit Column Mapping” window is displayed. Previously, as soon as the user added a new row, it was appended at the top of the existing rows rather than at the bottom. This is now resolved, and new rows are appended at the bottom.
  • Build Auto Lineage
    • In the Build Auto Lineage feature, there was an issue where selecting codes on the Build Auto Lineage page resulted in all codes from other pages being automatically selected, even if the user did not explicitly choose them. As a result, the lineage was built with unintended code selections. However, this selection process has been improved, and now the lineage is built only with codes that the user explicitly selects.
    • In the Build Lineage > Correct Query page, the queries containing "Create or Replace view" statements are failing. This issue is now fixed.
  • Compare Schemas
    • Users were unable to add objects to Impact Analysis as affected objects by grouping, such as adding deleted objects, modified columns, or new objects. A provision for this has now been added.
    • After adding the “Changed Column to Impact Analysis”, the incorrect connector name was displayed for the objects on the Impact Analysis summary page. Also, when the user downloaded the impact analysis results, not all results for the affected objects were displayed. Both issues have been resolved.
  • Load Metadata from Files
    • There was an issue adding tags to a term through the "Load Metadata from Files" Business Glossary template. This issue is now resolved, and the application is working as intended.
    • There was an issue where the Tags associated with Report Column were not being displayed when the column was downloaded. This issue is now fixed, and the tag associations are being displayed.
    • When a template lacks a header, the expected behavior is for the job to fail. However, the log displays random table names that are not present in the CSV (even though no metadata is actually modified), and the job is erroneously marked as having run successfully. This issue is now resolved and the job log displays accurate details.
    • The application was returning an error when users tried to download the Business Template. This issue is now fixed, and users are able to download the template.
    • The application was returning an error when the user attempted to upload metadata using the Table Template. This issue is fixed, and users are now able to upload metadata using the Table Template.
    • When attempting to download the Report or Report Column Template along with its data, users encountered an error message stating "Error while creating a CSV file", which prevented the download. This issue has now been resolved, and users are able to successfully download Report and Report Column Templates along with their data.
    • The associated tag name in the application differed from the one in the template with data for a few objects. This issue has been resolved.
    • The detailed description of a tag was not getting updated using the Business Glossary template. This issue has been resolved.
    • Users were encountering 504 gateway-timeout errors while downloading the business glossary template. This issue is now fixed and the users are able to download the business glossary template.
    • The heading of the “Cronentry” column in the “Data Quality Rule template without data” has been modified to include the scheduling pattern, helping users enter the time to schedule the Data Quality Rule in the given pattern.
    • After downloading the Business Glossary template with data the “ADD” option was displayed in some other field instead of the 'Action' column. Now, it is fixed to display the “ADD” option in the “Action” column.
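The scheduling pattern referenced for the Cronentry column above follows the standard five-field cron layout (minute, hour, day of month, month, day of week). A loose validity check for such entries might look like this (OvalEdge's exact accepted syntax may differ):

```python
import re

# One cron field: "*", a number, or a range (e.g. 1-5), optionally with a
# step (/n), and comma-separated lists of these atoms.
_ATOM = r"(\*|\d+(-\d+)?)(/\d+)?"
_CRON_FIELD = re.compile(rf"{_ATOM}(,{_ATOM})*")

def looks_like_cron(entry: str) -> bool:
    """Loose sanity check for a five-field cron entry,
    e.g. '0 2 * * *' (every day at 2:00 AM)."""
    fields = entry.split()
    return len(fields) == 5 and all(_CRON_FIELD.fullmatch(f) for f in fields)
```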
  • OvalEdge APIs
    • Tags API | The GET APIs were displaying Parent Tag names and Child Tag names as null, making it difficult to know the hierarchy of the respective tags. This issue has been fixed, and the accurate names of the tags are now displayed.
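With the fix, parent and child names arrive populated in the API payload. For illustration, resolving readable parent names from a flat tag list on the client side might look like this (the response shape shown here is hypothetical, not OvalEdge's actual API schema):

```python
# Hypothetical flat tag list, as a GET /tags-style response might return it.
tags = [
    {"id": 1, "name": "PII",   "parentId": None},
    {"id": 2, "name": "Email", "parentId": 1},
    {"id": 3, "name": "Phone", "parentId": 1},
]

# Index tags by id, then attach a readable parent name to each tag so the
# hierarchy is visible at a glance instead of a bare numeric reference.
by_id = {t["id"]: t for t in tags}
for t in tags:
    parent = by_id.get(t["parentId"])
    t["parentTagName"] = parent["name"] if parent else None
```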

Administration 

  • Connectors
    • For the Looker Connector, the system was previously unable to retrieve all the Report Groups defined in the data source and display them in the Data Catalog Reports. This issue has now been resolved, and the system fetches all the Report Groups that exist in the data source and displays them in the Data Catalog Reports.
    • When editing the connection details of an ADP connector, the error "Error while fetching existing connector Attributes: null" was displayed. This issue is now fixed.
    • The tables for the SOP BOD Connector were missing when users attempted to run the temporary lineage correction due to a versioning issue. This issue is now fixed.
    • While building lineage, the Looker connector queries failed due to an error in a query. The errors in the query have been fixed and the application is working as expected.
    • There was an issue with creating temp lineage tables in the Tableau connector for building lineage from Tableau to Snowflake. This issue has been resolved and the users can now create temp lineage tables.
    • While validating the Power BI reports there were some null exception errors. Now it is fixed and users are able to validate the Power BI connection.
    • The AWS Secret Manager Connector failed to establish a valid connection for the Role Based Connection type, although the bridge was running (Live). This issue has been resolved.
    • For SQL Server Connector, the lineage for column level mapping between (#FBNK_ACCOUNT_PREV to #ALL_FBNK_ACCOUNT) was not displayed correctly, even though the selected query was corrected and validated for the SQL Server Connector. This issue is now resolved and the column level mapping is being displayed accurately.
    • The application was returning an error when users crawled the data source using the Tableau Online Connector. This issue is now resolved, and users can crawl the data source without errors.
    • If the user encountered an authentication error while accessing the Data Source and attempted to crawl the metadata in the OvalEdge application with the Power BI connector, the already existing metadata was getting deleted. This issue is fixed, and the existing metadata is no longer deleted.
    • The application was returning an error when the user attempted to build lineage using the Denodo Connector. This issue is now resolved.
    • The Deltalake Connector supports two types of databases (Regular and Unity). On the Manage Connection Form, the Deltalake connector was added and validated with the Regular Database type. When the user attempted to change the database type from Regular to Unity, an error message appeared. This issue is fixed and now using the drop-down menu user can change to another database type.
    • The Workday Connector could not be validated, and no validation errors were displayed. This issue is now fixed: users are able to validate the Workday Connector, and the application returns validation errors if the validation fails.
    • In the Azure SQL Managed Instance Connector, profiling views produced no profile results. This issue has been fixed, and the profile results are now displayed.
    • A duplicate unprocessed lineage status was shown. This issue is fixed, and lineage is displayed accurately.
    • While adding a connector, even if the user did not select roles for Integration Admins and Security and Governance Admins - both of which are mandatory - and clicked the Validate button, the connector would still validate successfully. This issue has been resolved: if the user attempts to validate the connector without selecting these roles, an error message now prompts the user to choose them.
    • While building lineage using the Tableau Connector, a temp lineage table was created instead of sourcing the existing table. This issue is now resolved, and the existing table is sourced correctly.
    • In Settings | Profiler, when a user attempted to change the Profile Type, a "Problem Updating Settings" error message was displayed. This issue has been resolved, and users can now successfully change the Profile Types - such as Auto Disabled, Query, and Sample - for the connector without encountering this error.
    • The Power BI Connector crawler page and Auto Build Lineage option were not displayed because a specific “PBIX” file was missing. This issue is now fixed, and the option is displayed.
    • For the Azure Data Lake Connector, even when the user provided the correct details for establishing a connection, a validation error would occur and the error logs would not be displayed. This issue has been resolved, and users can now successfully establish an Azure Data Lake connection with the storage account details without encountering validation errors or missing error logs.
  • Users & Roles
    • When a new user was added by providing their details and email address, the user received an email containing the OvalEdge application URL and their login credentials. However, the URL in the email did not work, and the user was unable to access the application. This issue is now resolved, and the URL received by newly registered users works properly.
    • The "Role Description Field" was taking up excessive space, which has been fixed.
    • When a user is deleted, their permissions on database objects are now transferred or deleted, and permissions for deleted users no longer appear in Administration > Security.
    • An error message, "There is some problem getting a result," appeared when selecting a Snowflake connector and clicking on the Connector Role tab. This issue is now resolved.
  • Security 
    • When a user clicked the Edit icon in the Available Roles column to apply security, the interface previously navigated to an “Update Permission” page for Databases, Tables, Files, and Folders, while a pop-up window appeared for the remaining data assets. For an improved user experience, the Update Permission page is now uniform across all data assets in the Security module.
    • A “Reset Permission” option has been added, the “Update Permission” option has been removed from the Nine Dots menu, and a custom row access policy section has been added.
    • In the Edit permissions screen for all RDAM connectors, the 'Role Name' label is changed to 'Users / RoleNames'.
    • The OE Admin user was unable to see the tables in the tables tab. This issue is now resolved.
    • The Save button has been replaced with a checkbox, making it more intuitive for users to save their changes. There were also issues with saving a category or sub-category name that had been selected and then modified; this issue is now resolved.
  • Service Desk Templates
    • A new custom template was added and published at the connector level for the object type Table. Although the template was published, it was not available at the connector level. This issue is now resolved, and the template is available at the connector level.
    • The Business Glossary template header was not clearly visible due to the black font color on the blue cell background. To improve the visibility of template headers, the font color has been changed to white and the cell background color to blue.
    • It is possible to clone a Term Creation Request template. When the user edited and saved a cloned template, an error message was displayed due to fields missing during cloning. This issue is now resolved.
  • Advanced Job
    • When building lineage for objects that have the same table and column names, the source is now resolved using the Folder ID instead of the File ID, and the log value is displayed in Job Logs.
    • There was an issue executing the advanced job - “Get queries from Vertica by using a query”, This issue has been fixed and the users are able to execute the advanced job.
    • In the Deep Analysis project, when a single schema was selected for crawling, the job log showed proper results, but when multiple schemas were selected, a string error was displayed. This issue is fixed, and correct results are shown when multiple schemas are selected.
  • Custom Fields
    • When local custom fields were added to a connector, the default (existing) global custom field values were not displayed. Adding a custom field should not remove existing values. This issue is now resolved, and both local and global custom field values are displayed.
  • Configuration 
    • The "ovaledge.connector.creator" and "ovaledge.domain.creator" settings should be limited to an "Author User" value, specifically OE-ADMIN. This issue is now fixed: if any value other than an "Author User" is entered, an error message is displayed indicating that the role is not available.
    • The issue where the "Updated By" and "Updated On" columns fail to update automatically when the user makes changes to the "Value" column is resolved.
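As an illustrative sketch, the "ovaledge.connector.creator" and "ovaledge.domain.creator" settings mentioned above might look like the following properties-style entries. Only the key names come from this release note; the file format and the OE-ADMIN example assignment are assumptions:

```
# Hypothetical properties-style entries (format assumed).
# Both keys accept only an "Author User" value; entering any
# other value now produces a "role is not available" error.
ovaledge.connector.creator=OE-ADMIN
ovaledge.domain.creator=OE-ADMIN
```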

Hotfix Release6.1.0.7

OvalEdge Release6.1.0.7 is a hotfix release that includes improvements to the connectors and advanced tools.  

In this release, the critical and significant bugs associated with the Data Catalog, Business Glossary, Service Desk, My Resources, Advanced Tools, and Administration modules are fixed and working as expected.

Release Type: Hotfix Release
Release Version: Release6.1.0.7
Build <Release. Build Number. Release Stamp>: Release6.1.0.7.6107.a226471
Build Date: 31 July, 2023

Improvements

  • Connector

    Azure Data Factory (ADF): This release introduces support for crawling "Pipeline_Run_params" objects into OvalEdge and adds loggers to track and identify the root causes of any difficulties encountered; job logs now provide valuable insights for diagnosing issues. This update also resolves the previously encountered issue in the crawling process for "Pipeline_Run_params" objects, ensuring successful completion. A code fix addressing the root cause is planned and will be implemented at a later stage.
  • Advanced Tools

    In the "Load Metadata from Files" feature within Advanced Tools, specifically in the "Business Glossary Templates" section, a known issue was identified when uploading new terms for the first time. The governance roles associated with the newly uploaded terms failed to update as intended. However, this issue has been successfully resolved. Now, when uploading new terms for the first time, the governance roles are correctly updated as expected.

Bug Fixes

The following bugs have been fixed in this release, and the application is working as expected.

Data Catalog

  • In the Data Catalog, users faced an issue where after running a profile job for Azure Data Lake (ADL) in OvalEdge, they were unable to view sample data under the DATA TAB. The DATA TAB displayed a message indicating that no data existed, even though the profiling process had been completed. However, the issue has now been resolved. Users can now access and view the sample data in the DATA TAB successfully after the profiling process.

  • In the Data Catalog > Reports tab, the Report Type filter displayed an incorrect count of reports compared with the total count of reports. This issue has now been resolved, and the filter accurately reflects the reports present in the catalog.

  • In the Data Catalog, users encountered an issue where the "Add to Impact Analysis" option from the Nine Dots menu was not clickable which prevented users from adding data objects to the Impact Analysis. This issue has now been resolved and the option is made clickable and fully functional.

Business Glossary

  • In the Business Glossary, the masking feature encountered issues with the "Number" and "Date" column types. Users reported that when adding new policies for these column types, the policies were not applied to the data as expected. However, this issue has been resolved. Now, users can successfully add new policies for the 'Number' and 'Date' column types, and masking is accurately applied to the designated columns as intended. 

  • In the Business Glossary, specifically on the Summary Page, there is an issue where clicking on one checkbox unintentionally checks all the other checkboxes. This issue has been resolved and the checkboxes now function correctly.

  • In the Business Glossary, an issue was encountered where the "Approval By" and "Approver Date" fields were displayed when a term was in the "Draft" status. This issue has now been resolved: the fields are displayed only when the term is in the "Published" status and are appropriately hidden in the "Draft" status.

  • In the Business Glossary, when users selected the "Copy to Steward," "Copy to Custodian," and "Copy to Owner" checkboxes and then published the term, the expected results were not updated for that data object. This issue has been resolved, and the selected checkboxes now update the steward, custodian, and owner fields for the published term, providing accurate and up-to-date information in the Business Glossary.

Service Desk

  • In the Service Desk module, users encountered spacing issues while creating new service desk templates. The problem was specifically related to the “filecolumn” and “GlobalDomain” objects, which resulted in errors during the template creation process. However, the issue has been resolved, and users can now create templates without facing any spacing issues related to the filecolumn and GlobalDomain objects. The template creation process is now smooth and error-free.

My Resources

  • In the "My Resources" > Notification section, users had reported an issue with the notification emails. The problem was related to the incorrect display of the special character “$” in the content of the emails. Additionally, the type of object referred to in the notifications was not spelled out, causing confusion and making it difficult for users to understand the context of the messages. However, the issue has been resolved, and users no longer experience any problems with special characters appearing incorrectly in the email content.

  • In the "My Resources" > Notifications section, users encountered a recurring issue where they received the same notification alert every half an hour for terms that were created. This constant and repetitive stream of notifications caused inconvenience and potential information overload for the customers. However, the issue has been successfully resolved. Users are no longer receiving the same notification alert repeatedly for the terms that were created. 

  • In the "My Resources" > Notifications section, users encountered an issue with email entries being duplicated when a term is suggested through Load Metadata from Files. This issue has been resolved, and users will now receive a single email notification for each term suggestion, eliminating redundancy.

  • In the "My Resources" > Notifications section, users are experiencing the issue of receiving multiple emails for a single action. This issue has been addressed, and the system now sends only a single email for each action performed by the user.

Advanced Tools

  • In the “Impact Analysis”, users encountered an "HTTP status 500 - Internal Server Error" when they tried to access Impact Analysis. This issue prevented them from analyzing data dependencies effectively. The issue has been resolved, and users can now access Impact Analysis without encountering any "HTTP status 500 - Internal Server Error".

  • In the "Load Metadata from Files" > "Business Glossary Templates" section, users encountered an issue where they were unable to load the Business Glossary Templates. This problem impacted their access to the predefined templates, hindering their ability to maintain consistent data terminologies and definitions. However, the issue has been resolved, and users can now access and utilize the predefined templates without any problems. 

  • In the "Load Metadata from Files" > "Business Glossary Templates" section, users encountered issues related to data associations, custom fields, tags, and governance roles. The checkbox behavior was incorrect, but refreshing the page resolved it. Moreover, the problems with tags not applying correctly through Load Metadata from Files and governance roles not updating as expected were also addressed. The issue has been resolved, and now all the issues related to data associations, custom fields, tags, and governance roles in "Load Metadata for Business Glossary" have been successfully fixed.

Administration 

  • In the Administration | Security | Databases | List View, the Custodian filter was not functioning as expected. When users selected specific custodians using the filter, the corresponding results failed to display in the list view. This issue has been successfully resolved, and the relevant results are now correctly displayed in the list view.

  • In the Administration | Security | Databases, when both the "Active" and "Inactive" checkboxes were selected in the Status filter, only the "Inactive" results were displayed, so users could not view the complete list of databases with both active and inactive statuses. The issue has been fixed, and the Status filter now functions correctly.

  • In the Administration | Security | Applications, there was a tooltip issue where it incorrectly displayed "disabled" even when the toggle for the application was set to "enabled." This issue is now resolved, and the tooltip displays the correct status.

Connectors

  • For Azure Synapse Connector, users encountered an issue where certain procedures within specific source codes could not be parsed, leading to the inability to build lineage for those codes. This limitation hindered the ability to trace data flow and dependencies accurately. However, the issue has been addressed and resolved effectively. The system now successfully parses and interprets all procedures during the lineage-building process, enabling users to establish lineage for each source code accurately. 


Hotfix Release6.1.1.1

The latest OvalEdge Release6.1.1.1 brings substantial enhancements to various modules and licenses. Users can now efficiently manage data through improved Data Catalog features, enhanced Lineage visualization, a more user-friendly Business Glossary, and streamlined Service Desk operations. The system also offers advanced Data Quality Rule customization, updated Advanced Tools functionality, and numerous administrative enhancements for improved system performance and security. New connectors expand data source accessibility, while additional system settings provide users with additional control over application behavior. Furthermore, a set of new advanced jobs empowers users to perform diverse data management tasks with precision. 

This release also resolves critical and significant bugs in various areas, ensuring a smoother experience.

Release Type: Hotfix Release
Release Version: Release6.1.1.1
Build <Release. Build Number. Release Stamp>: Release6.1.1.1.6111.06c9918
Build Date: 26 September 2023


Improvements

OvalEdge Licenses

  • The Contributor License type for users, introduced in Release6.1, is being discontinued. Existing Contributor License users will be upgraded to Author License users.

Global Search 

  • When a term was created without a Business Description and later associated with multiple objects, searching by Business Description did not display the associated terms in the Elasticsearch results. The improvement now ensures that terms related to Business Descriptions are accurately shown in the Elasticsearch results.

Tags

  • The 'Tags' feature has been enhanced to allow users to choose a parent tag and view all associated child-tagged items in a list view format. This improvement offers users a more convenient and efficient means of observing and managing the hierarchical relationships between parent and child tags within the system.

  • In the ‘Tags’ section, the "tag.multiple.parent.hierarchy" configuration has been modified. The default value is "true"; when set to "false," the application displays the entire tag hierarchy.
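A sketch of the configuration described above, assuming a properties-style entry; the key name and values come from this release note, but the file format is an assumption:

```
# Hypothetical properties-style entry (format assumed).
# Default is true; setting it to false displays the entire
# tag hierarchy in the application.
tag.multiple.parent.hierarchy=false
```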

Data Catalog

  • In the Data Catalog, a filter has been added to facilitate searching for null density, density percent values, masking, and restriction. By incorporating these filters, users will have the ability to easily identify data objects based on their null density, density percent values, and masking attributes.

  • In the Data Catalog, users were unable to apply the Domain filter to the system view while exploring datasets. This functionality has been enhanced, and now users can apply the domain filter to the system view.

  • In the Data Catalog, a download option was not available on the References Pages of all Data Objects. Users were unable to access the download option in the reference tab of the code section. This has been improved, and users can now download the references of the code from the Data Catalog. 

  • In the Data Catalog, the "9 dots" menu included an option to apply certification to an object. The certification popup lacked a history icon. Now, the enhancement introduces a history icon in the certification popup. This addition allows users to track and view changes made to the certification status of objects over time, providing improved transparency and auditability.

  • In both the Data Catalog and Security, terminology for databases and schemas was inconsistent: some contexts said 'Database' while actually displaying 'Schemas'. This has been enhanced, and a 'Database' column has been added to table plugins.

  • In Data Catalog > Reports, users encountered a limitation where visual representation of SSRS and Power BI reports was not possible. The same issue was also present in the 'Dashboard > All Reports' section. This functionality has been enhanced and users can now access visual representations of these reports by selecting the 'open in SSRS' and 'open in Power BI' options available in the '9 dots' menu.

  • In the Data Catalog, users encountered session timeout issues when attempting to download table column data from the data catalog into a .CSV file. The session timeouts occurred while navigating to the table column tab in the data catalog and clicking on the Download button. To resolve this problem, changes have been implemented to optimize the download process. As a result, users can now download table column data from the data catalog into a .CSV file without experiencing session timeout issues.

  • In the Data Catalog > Reports, when users added downstream objects to reports, the importance score did not increase as expected. The functionality has been enhanced by setting the configuration (lineage.creation.increase.object.importance) in System Settings to True. Now, when users add downstream objects to reports, the importance score is accurately calculated and increased, ensuring the score reflects the addition of downstream objects.

  • In the Data Catalog Reporting Framework, bar charts were not visible when only a few data objects were associated with them. This has been improved, and the bar charts are now visible even when fewer data objects are associated with them.

  • In the Data Catalog > Tables, when downloading table metadata from the summary tab, special characters did not appear in the downloaded file. The issue impacted the accuracy and completeness of the downloaded metadata file. This has been improved and users can now download table metadata from the summary tab in the Data Catalog's Tables section, and the downloaded file will include special characters as expected.

  • In the Data Catalog > Configure View, "TRUE" values in the data classification column are now displayed consistently for all object types, including Table Columns, when users create views after selecting a classification. After this improvement, users see accurate "TRUE" values in the Classification column for all object types.

  • In the Data Catalog > Files, a new configuration has been added to enable users to hide the "Profile All Files" feature located under the 9 dots menu. With this configuration, users gain increased flexibility in managing access and visibility to the "Profile All Files" functionality within the Data Catalog. This enhancement allows users to customize their Data Catalog experience according to their specific needs and preferences.

  • In the Data Catalog > Objects > Summary > Tags, there was a need to display tag descriptions in the tooltip for child tags, along with their tag hierarchy. This functionality has been enhanced, and now the tag descriptions are displayed in the tooltip for Tags.
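The lineage.creation.increase.object.importance setting mentioned in the Reports item above might be sketched as a properties-style entry; the key name comes from this release note, while the file format is an assumption:

```
# Hypothetical properties-style entry (format assumed).
# When true, adding downstream objects to a report increases
# the report's importance score.
lineage.creation.increase.object.importance=true
```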

Lineage Enhancements

Here are the enhancements made to the Lineage feature in the Data Catalog:

  • Data Catalog - A new column named "Has Lineage" has been added to the List View; it displays a lineage icon for data objects that have lineage and remains blank for those without. Users can filter lineage information by applying "Yes" or "No".
  • Data Catalog > Codes - Introduced a graphical view of lineage alongside the existing tabular view to display source and destination lineage information for specific codes. Also introduced a new column named "Has Associations" that shows whether a code has associations (Yes or No), allowing users to filter results accordingly.
  • Data Catalog > Lineage Tab

    • Introduced a Tabular View for all data object types to list source and destination objects associated up to the first level. 

    • Made lineage information downloadable, with options to select upstream, downstream, or all lineage levels using radio buttons.

    • Implemented a tabular view within the column mapping window. A download icon has been provided to download the source and target columns.

    • Added a Certification Icon on data objects within the lineage, providing information about the certification status. Users can click on the Certification Icon to view the certification history in a pop-up window.

    • Displays Data Quality (DQ) scores on data objects involved in the lineage. Users can click on the DQ score to access a detailed DQ dashboard.

    • Enabled zoom-in, zoom-out, and full-screen options for improved navigation and interpretation of lineage diagrams.

  • Lineage Maintenance - Added a "Lineage Source" column that displays the lineage building type (e.g., auto, manual, API) for better understanding and tracking of lineage sources.

Business Glossary

  • In the Business Glossary, users now have the option to exclusively view 'PUBLISHED TERMS' when accessing a domain, with 'All Terms' being hidden. This functionality is managed via a toggle button, with the default setting set to 'yes.' When the toggle button is disabled, users will only have visibility of draft terms.

  • In the Business Glossary, users were unable to retrieve DATA and Pattern Scores when running AI Recommendations. The functionality has been enhanced, and now DATA and Pattern Scores are retrieved when running AI recommendations.

  • In the Business Glossary, the Category and Subcategory did not have description fields. Description fields have now been added for both, enabling users to provide detailed information and descriptions for categorized business terms and concepts.

  • In Business Glossary > Tags, requiring manual tagging created the potential for errors. This functionality has been enhanced: by running the advanced job "AssosciateTermTagsToObjects", users can now assign tags to terms and their associated columns automatically.

  • In the Business Glossary, users can now suggest a term within their preferred category. When submitting a service request, if a domain is selected, the steward associated with that domain receives the approval request; if a category is chosen, the steward at the category level receives the service request for approval.

  • In the Business Glossary, a request was made to implement color coding for terms during the creation process. This functionality has been enhanced, allowing users to assign specific colors to different terms in the Business Glossary for visual differentiation. To change the color of a domain, users go to Security > Domains, edit the domain name, and select a color.

  • In the Business Glossary, users can now conveniently select governance roles in bulk under Terms. This feature is available within the Load Metadata from Files module, enabling streamlined management of governance roles while loading metadata from files and facilitating more efficient data management and governance processes.

Dashboard

  • In Dashboard > All Reports, the report group name was previously not visible when users hovered over reports, making it challenging to identify the group to which the report belonged. This enhancement now displays the report group name when users hover over any report.

Service Desk

  • The Service Desk Template Approval Workflow was enhanced to highlight duplicate entries in red. When users enter the mandatory details and add multiple teams and users as approvers, the system automatically highlights any duplicate entries in red, making it easier to identify and correct them promptly.

  • In the Service Desk, the Ticket Closing option was only partially implemented, so some templates had it while others did not. Users could see the close option for resolved tickets when accessing specific tables, but it was missing for schema access request tickets. The functionality has been improved, and the Ticket Closing option is now fully implemented: users can see the close option for all tickets with a Final Status, including both resolved tickets (indicating action taken to address an issue) and successfully fulfilled tickets (confirming both technical resolution and customer satisfaction).

  • In the Service Desk, all new term creation requests were automatically directed to domain owners/stewards, causing an overload of requests. This functionality has been enhanced, and users can now specify the "Category" and, if applicable, "Subcategory" of the term in the request form. This ensures that new term creation requests are correctly routed to the appropriate stewards/owners, reducing the overload of requests sent to domain owners/stewards.

  • In the Service Desk, the error messages have been revised to make them clearer and more accurate, preventing user confusion or misinterpretation.

  • In the Service Desk, the "Bring your own fulfillment" feature allows users to integrate their own fulfillment processes and systems with the Service Desk platform. This enhancement enables organizations to customize and streamline their fulfillment workflows, leveraging their existing tools and processes to manage service requests efficiently.

  • In the Service Desk, multiple tickets used to be generated for the same Data Quality rule when it failed multiple times, leading to ticket accumulation. This issue has been addressed by introducing a feature that allows bulk selection and management of "Violated Data Quality Rule" tickets. Users can now approve or reject all tickets related to one rule simultaneously, making ticket handling more efficient.

Data Quality 

  • In the Data Quality > Data Quality Rules module, a modification was made with the main objective of enabling users to effectively utilize custom SQL rules by leveraging a pre-established stored procedure. This enhancement provides users with the capability to define and execute their specific SQL rules, making use of the functionalities offered by the system's existing stored procedure.

  • In the Data Quality > Data Quality Rules module, enhancements have been made to enable query execution support for the Athena connector, including the query sheet and GDQ (Governed Data Query) support that was not available before. 

  • In the Data Quality > Data Quality Rules module, when a Data Quality Rule failed, it used to create multiple tickets upon re-running the same rule for the same issue. Users were required to approve or reject all these tickets associated with one rule at once. This functionality has been improved, and now only one ticket is raised for the same issue, streamlining the process.

  • In the Data Quality > Data Quality Rules module, previously when users removed additional associates, the Data Quality Score did not update. This issue has been resolved, and now the Data Quality Score correctly updates when users remove additional associates from association. The "Additional Associations Count" field has been implemented to display the count of associations with the Data Quality Rule. Moreover, the Service Request (SR) Score Donut chart now accurately represents the SR score by filling proportionally to the numeric value.

  • In the Data Quality > Data Quality Rules module, previously when a Custom SQL Rule failed and had Associated Objects with Additional Associations, Service Requests (SRs) were not being generated for each of these Additional Associations, instead generating a service request for the root Associate objects. This functionality has been enhanced, and now, each Additional Association object generates a service request.

My Resources

  • In My Resources > My Watchlist, when users previously checked the 'Metadata changes' tab for Data Quality Rules (DQR), terms were displayed for DQRs even though the latest system version does not use terms at the DQR level. This functionality has been enhanced, and terms no longer appear under Metadata Changes for Data Quality Rules in My Watchlist.

  • In My Resources > My Permissions, a flag and rating feature has been added to both table columns and files. Prior to this enhancement, users were unable to mark or rate specific table columns or files within the "My Resources" section. With this improvement, users can now flag or rate individual table columns and files.

  • In My Resources > Inbox and Service Desk, reminders previously displayed the remaining time only in hours and minutes, which made it hard to gauge the remaining time precisely. This functionality has been improved: users can now view the exact timestamp, with the remaining time shown down to the second, across all tabs in the Inbox and Service Desk.

  • In My Resources > My Watchlist > Tables and Table Columns, there was previously no "Tags" column, which made it difficult to identify the tags and terms associated with items. A "Tags" column has been added, giving users a clear, organized view of the tags associated with each item.

  • In My Resources, the Collaboration Messaging system has been integrated with Teams, email, and Slack to facilitate seamless communication and collaboration across these platforms. This integration improves overall efficiency and promotes teamwork within the organization.

Query Sheet 

  • In the Query Sheet, users previously did not receive error messages in the result pane when SQL queries failed, and had to check the logs manually for error details. This functionality has been enhanced: error messages now appear in the result pane.

Advanced Tools

  • In Advanced Tools > Impact Analysis, the "Source Objects" heading previously appeared clickable because it was positioned close to other interactive buttons, causing confusion. The layout has been improved to make clear that the "Source Objects" heading is not an interactive element.

  • In Advanced Tools > Load Metadata from Files, previously when users uploaded terms for the first time using the 'Load Metadata' feature, the Governance roles associated with these terms were not updated as expected. This functionality has been enhanced, allowing users to assign Governance roles to terms created for the first time.

  • In Advanced Tools > Load Metadata From Files for the Business Glossary, the downloaded template previously had sheet names that did not match the names used in the application, i.e., "Related objects" and "Associated Data." This has been corrected: the downloaded template's sheet names are now consistent with the names used in the application.

  • In Advanced Tools > Build Auto Lineage, when users previously validated a query on the "Correct Query" screen after building lineage, there was no loader symbol to indicate whether the validation was in progress. This issue has been resolved, and users can now see a loader symbol while validating a query.

  • In Advanced Tools > OvalEdge APIs > Business Glossary API, an existing API has been modified to make certain fields invisible in Term Search. This enhancement enables the retrieval of term details using a term ID.

  • In Advanced Tools > OvalEdge APIs > Business Glossary API, users were previously unable to perform update or delete operations through the Add Terms endpoint. A new Add or Modify Term endpoint has been introduced that supports both add and update operations, and the delete functionality has been removed from these endpoints.

Administration

  • In Administration > Connectors > ADF (Azure Data Factory), lineage is commonly established between data sources and destinations such as tables and files; it is equally crucial to build lineage connections between activities, including their names, to fully comprehend the lineage flow. Separately, users were previously unable to search unstructured data files. This has been addressed: the system can now scan and extract pertinent title information from unstructured data files via the CIFS (Common Internet File System) protocol, significantly strengthening data search and analysis.

  • Within the Administration > Connectors framework, a recent addition has been made to the Connector page for Salesforce integration. This new feature introduces the option to set a KeyStore password and alias, thus enhancing the security of data transfers through Key Pair authentication. 

  • The Administration > Connectors > Dell Boomi Connector has undergone several improvements to enhance its functionality and usability:

    • Enhanced the relationship between processes and sub-processes.

    • Built lineage between a map component's source and destination.

    • Enhanced the relationship between Boomi SQL and the process.

  • In Administration > Connectors > RDAM, the masking policy was not effectively restricting VIEW access after being disabled for objects or tables and their columns, compromising data security and privacy. The masking policy has been enhanced to ensure that it effectively restricts VIEW access and protects data security and privacy.

  • In Administration > Connectors, Looker reports previously had missing lineage details for specific report types, namely Looker Dashboards using merged queries. This functionality has been enhanced: lineage is now correctly generated for these reports, allowing users to track the lineage of their data.

  • In Administration > Connectors, users previously had difficulty accessing the Salesforce row count because a high-level permission was mandatory. This functionality has been enhanced: users can now access the necessary data while adhering to the client's permission constraints.

  • In Administration > ADM Connectors, the auto lineage option previously remained enabled for connectors such as MongoDB and Cassandra that have no lineage information, which could cause inaccuracies in data management and affect the user experience. The auto lineage option is now disabled for connectors that do not have any lineage information.

  • In Administration > Connectors > NIFI Bridge, performance timing instrumentation has been added to bridge crawling and profiling jobs. The time taken by each stage of these processes is now captured and recorded, providing valuable insight into job efficiency and execution speed.

  • In Administration > Connectors > ADM Connectors, the connection validation performed against the Admin APIs for Power BI and Tableau has been improved, further ensuring seamless integration between the Admin APIs and these third-party platforms.

  • In the Administration > Connectors > ADL Connector > Profiling > Azure Data Lake with Azure Key Vault, users previously encountered an issue where data was not visible under the Data tab after profiling any file. The Data tab was not displaying the profiled data, hindering users from analyzing and gaining insights from the profiled information. This functionality has been enhanced, and users can now access the profiled data under the Data tab for analysis and insights.

  • In Administration > Connectors > RDAM, creating roles, groups, and users in the remote system did not work when OvalEdge was set as the master in the connection settings. This has been fixed: roles, groups, and users are now correctly created and synchronized in the remote system when OvalEdge is the master.

  • In Administration > Connectors > "JSON Advance Job," schemas became inactive during a database recrawl because the temp schema was not present in the remote system. The temp schema is now always kept active during re-crawl, ensuring that codes remain functional and accurate.

  • In Administration > Connectors > Azure Data Factory (ADF), there was previously a problem transferring "Pipeline_Run_params" objects to OvalEdge in the production environment. This issue has been addressed, and users can now transfer these objects without any problems.

  • In Administration > Connectors > API, a GET request to the endpoint /api/user/getUserList to retrieve all users previously returned a 404 error instead of the expected response. The functionality has been enhanced: the offset and limit parameters are now implemented for GET /api/user/getUserList, ensuring that all users are retrieved and displayed correctly in the system.
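    A minimal sketch of consuming the paginated endpoint. Only the endpoint path and the offset/limit parameters come from the note above; the per-page response shape (a JSON list of user records) and the client wiring are assumptions.

    ```python
    # Hedged sketch: walking GET /api/user/getUserList with the offset/limit
    # parameters. The HTTP transport is injected as a callable (e.g. a thin
    # wrapper around requests.get) so the paging logic stays client-agnostic.

    def fetch_all_users(get_json, page_size=100):
        """get_json(path, params) -> parsed JSON page. Returns every user."""
        users, offset = [], 0
        while True:
            batch = get_json("/api/user/getUserList",
                             {"offset": offset, "limit": page_size})
            if not batch:  # an empty page signals the end of the list
                break
            users.extend(batch)
            offset += page_size
        return users
    ```

    With the `requests` library, `get_json` could be as simple as `lambda path, params: requests.get(base_url + path, params=params, headers=auth_headers).json()`.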

  • In Administration > Connectors > ADM, User Preferences, an "Edit" pencil icon appeared next to the user name field, but the username could not be changed with it, leading users to think it was editable. The "Edit" option has been moved to the nine dots menu, making it clear that the username cannot be changed from this field.

  • In Administration > Connectors > RDAM, the absence of a module for tracking user access to data objects made data access control difficult to manage and raised concerns about potential security vulnerabilities. A module has now been added specifically to track user interactions with data objects and generate detailed reports displaying user privileges.

  • In Administration > Connectors, we tuned the Tableau APIs and applied filters to reduce hits on the remote server. This optimization addressed the issue of the Tableau server receiving millions of hits from the Tableau account used in OE. As a result, the Tableau server's performance improved significantly, handling a large number of workbooks, data sources, and folders for the Life Solutions Tableau site.

  • In Administration > Connectors, a significant enhancement has been made through the implementation of Credential Manager. This addition strengthens the security of client credentials, ensuring a secure system for managing connector credentials.

  • In Administration > Connectors > S3, the OvalEdge application previously encountered limitations in supporting all data types within Parquet files, leading to difficulties parsing columns and displaying them in the OvalEdge application. This enhancement has been implemented and now, the application lists and skips unsupported data columns in the job log, while displaying the supported columns in the Data Catalog across the Summary tab, Data tab, and Column Details tab.

  • Oracle Service Cloud Relationships: Previously, the connector contained table columns with additional sub-column data that was not fetched into OvalEdge, leading to missing data and row count discrepancies. Sub-columns are now incorporated under their respective table columns, so crawled data objects align with an accurate row count. The entity relationships are also now shown between columns and their associated sub-columns, giving users a more comprehensive understanding of the data structure, with relationships clearly depicted in the entity relationship view.

  • In Administration > Connectors, OvalEdge has significantly improved its lineage visibility from S3 to AWS Glue to Redshift. To achieve this, AWS Glue crawler integration is introduced, offering the ability to crawl various data sources, including those containing CSV, Parquet, and AVRO files.

  • Previously, in AWS Glue ETL, lineage building relied solely on crawling glue crawlers and storing them as datasets. The classifiers used in remote glue crawlers were added directly to the source code. We have enhanced the current functionality by storing the entire remote crawler object in the source code and when the crawler source is an S3 File, the lineage is built between the source S3 File and the glue table created.

    It's important to note that several crucial steps must be executed before building a lineage:

    • Crawling and Cataloging: Crawl the connection and catalog the files used by the crawler. This step ensures that all data sources are properly identified and prepared for lineage tracking.

    • Profiling Files: To facilitate column lineage, profile the files and extract key information about their structure and content.

    Once these preparatory actions are completed, the comprehensive lineage from S3 to AWS Glue to Redshift is built.
  • In Administration > Job Workflow > Schedule, an additional scheduling option for monthly job execution has been provided on the Job Workflow page. This enhancement allows selecting specific days of the month, such as the last day of every month.

  • In Administration > Security > Connectors, users previously encountered an issue where they could view the status (success, partial success, or killed) of connectors, but there was no filter option available to specifically view connectors with a desired status. The functionality has been enhanced by implementing the filter option for status. Users can now view the status of connectors and have the ability to filter and see connectors with their desired status. 

  • In Administration > Security > RDAM, the Permissions tab previously displayed only role names, leaving out the associated users' names. This limitation has been addressed: the Permissions tab now provides a comprehensive view that includes both user and role names.

  • In Administration > Audit Trails > Catalog, a new 'Descriptions' label has been introduced. Audit Trails now comprehensively record changes to business and technical descriptions at the object level, whether they are added manually or automatically through methods such as 'Load Metadata,' API interactions, or Service Desk actions.

  • In Administration > Security, the "Edit Authorized Roles and Users" feature has been enhanced to provide greater clarity and accuracy during permissions editing. Users will now see both User and Role names, facilitating easier identification and association in the "Update Permissions" process. This enhancement leads to a more precise and informative representation of permissions within the OvalEdge platform.

  • In Administration > Advanced Jobs > Build Auto Lineage, the process of building lineage between SAP HANA and Redshift connectors has received significant enhancements. These improvements empower the system to effectively handle tables or columns with distinct names, particularly when the "/" character is present. During the lineage process, the system now automatically substitutes "/" with "_" to facilitate uninterrupted data tracing and comprehension between these two connectors. Users can now establish precise lineage connections even when names vary, ensuring efficient data management and comprehensive analysis capabilities.

  • In Administration > Advanced Jobs, the system previously could not replace the character '/' with '_' in table names when building lineage for tables sharing the same name across different connections. This functionality has been enhanced: '/' can now be replaced with '_' in table names, enabling seamless lineage building and distinguishing tables with identical names.
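    The substitution itself is simple; here is a minimal illustration (the function name and the SAP-HANA-style sample name containing '/' are hypothetical, for demonstration only):

    ```python
    # Replace '/' with '_' in a table name so lineage can be traced across
    # connectors whose identifiers disallow '/', per the notes above.
    def normalize_table_name(name):
        return name.replace("/", "_")
    ```

    For example, `normalize_table_name("/BIC/SALES")` yields `"_BIC_SALES"`, letting the lineage job match the table across both connectors.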

  • In Administration > Advanced Jobs > Build Auto Lineage, there was a job assigned to extract queries for lineage construction. The presence of duplicate queries had caused a performance issue, leading to an extended job execution time. This issue has been resolved, and the job currently extracts unique queries for lineage construction, thereby reducing the execution time.

  • In Administration > Advanced Jobs > Extract queries from source to build lineage, the issue of duplicate queries previously caused performance problems and prolonged job execution times. This functionality has been enhanced and the job now ensures that only unique queries are extracted. In the subsequent advanced job, build lineage from extracted queries, these unique queries are utilized for lineage construction, resulting in reduced execution times.

  • In Administration > Advanced Jobs > Download Powerbi Pbix files, the Power BI advanced job has been enhanced to export reports using parameters first and, if unsuccessful, retry with report IDs and group IDs.

  • In Administration > Advanced Jobs > Download Powerbi Pbix files, the job can now be executed for multiple individual workspaces by specifying the domain names separated by commas, or for all workspaces by leaving the field empty.

  • In Administration > Advanced Jobs, the "Download Powerbi Pbix files" task has been enhanced to handle scenarios where exporting reports may require multiple iterations. A new parameter lets users specify whether failed iterations should be rerun, by providing "true" along with the number of iterations, or ignored, by specifying "false".
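    The two job inputs described in the last two notes — a comma-separated workspace list (empty meaning all workspaces) and the rerun flag with an iteration count — might be parsed along these lines. The function names are hypothetical, not the job's actual internals:

    ```python
    # Illustrative parsing of the "Download Powerbi Pbix files" job inputs.
    # Only the semantics (comma-separated domains, empty = all workspaces,
    # "true"/"false" plus an iteration count) come from the notes above.

    def parse_workspaces(raw):
        """Split a comma-separated domain list; None means all workspaces."""
        names = [n.strip() for n in raw.split(",") if n.strip()]
        return names or None

    def parse_rerun(flag, iterations):
        """Return how many times failed iterations should be rerun (0 = ignore)."""
        return int(iterations) if flag.strip().lower() == "true" else 0
    ```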

  • In Administration > JSON Advance Job, users previously saw duplicate views when executing a job because views were fetched from both Delta Lake and JSON. This error has been resolved: the JSON Advance Job now prevents duplicate views.

  • In Administration > System Settings, for the SAAS environment, downloading templates, downloading data, and ADL profiling previously required different temporary paths, and users had to change these paths manually each time. Two separate path configurations have now been implemented in the bridge: one for temporary-path downloads and one for ADL profiling of files in the SAAS environment.

  • In Administration > System Settings > Users & Roles, users could assign a single role to the ovaledge.tag.role configuration, but the system did not support adding multiple roles to the value. The current functionality has been enhanced and now users can add multiple roles to the ovaledge.tag.role configuration.

  • In Administration > System Settings > PK-FK relationships job feature > pkfk.relation.job.rowcount.check, the system now has the option to configure a matching row count check before building relationships. This allows users to validate the number of matching rows between the two columns before establishing connections, ensuring data quality and accuracy in the system.

  • In Administration > System Settings, SMTP validation has been improved by implementing OAuth for token generation. This enhancement eliminates the requirement for client ID and credentials.

Logs Backup

  • OvalEdge generates logs within the Tomcat environment to help diagnose issues when they occur. These logs are stored as "ovaledge.log" files in the "<Tomcat>\logs" folder, with each file limited to a maximum size of 10 MB. When a log file surpasses this size limit, it is automatically renamed to "ovaledge.log.1," and this process continues for up to 50 backup files.

    In log4j.properties, the default configuration of the "log4j.appender.logfile.MaxBackupIndex" key value was set at 50. It has now been decreased to 25, and customers have the flexibility to adjust this number based on their specific hard disk size requirements in their environment. This modification effectively reduces the number of log files generated and stored by half, leading to a 50% reduction in the necessary log storage space for each pod.
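    Assuming a standard log4j 1.x rolling-file setup, the relevant fragment of log4j.properties might look like this. Only the `log4j.appender.logfile.MaxBackupIndex` key and its new default come from the text above; the other keys are shown for context and reflect the described 10 MB file limit:

    ```properties
    # Rolling-file appender writing <Tomcat>/logs/ovaledge.log
    log4j.appender.logfile.File=${catalina.home}/logs/ovaledge.log
    log4j.appender.logfile.MaxFileSize=10MB
    # Default lowered from 50 to 25 in this release; adjust to your disk capacity.
    log4j.appender.logfile.MaxBackupIndex=25
    ```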

New Connectors 

  • CIFS

    The CIFS (Common Internet File System) connector is a specialized file connector designed to streamline the process of crawling files and folders. It leverages SMBJ, a powerful Java library for accessing files and resources over the Server Message Block (SMB) protocol. OvalEdge uses SMBJ to connect to the data source, allowing users to crawl and profile data objects (folders, files, file data, etc.). No lineage is present for CIFS.

  • Network Drive Scan

    The Network Drive Scan is a file connector that shares similarities with the CIFS (Common Internet File System) connector. Like the CIFS connector, it utilizes SMBJ, a robust Java library, to effectively crawl files and folders.

  • GitHub File Connector

    The GitHub File Connector is a specialized repository file connector designed for streamlined file access and management within GitHub repositories.

  • Unity Catalog

    Databricks Unity Catalog is a leading solution for unified data and AI governance in lakehouse environments. It enables organizations to seamlessly manage data, models, notebooks, and more across cloud platforms. Users can securely access and collaborate on trusted assets, boosting productivity with AI. Unity Catalog offers centralized access control, auditing, lineage tracking, and data discovery for Azure Databricks workspaces. OvalEdge uses a JDBC server to connect and perform data crawling, profiling, query execution, and lineage building.

  • IBM Datastage

    IBM Datastage is a powerful Extract, Transform, Load (ETL) connector. It excels in seamlessly connecting various components and executing complex data integration jobs.

  • Cognos 10.2

    IBM Cognos Analytics is an On-Premises/Cloud connector that seamlessly integrates reporting, modeling, analysis, dashboards, storytelling, and event management capabilities to enable informed and impactful business decision-making through comprehensive data insights. 

  • AWS Secrets Manager Redesign

    The redesign of AWS Secrets Manager introduces a change in how secrets are structured and named. Now, secrets are generated by including both the connection name and the connection ID as part of their naming convention. This adjustment enhances the organization and management of secrets, providing a more structured and efficient approach to secret management within AWS Secrets Manager.

  • Oracle Service Cloud Redesign

    The Oracle Service Cloud Connector has been redesigned to streamline metadata access, data crawling, and profiling through REST APIs. This update enables users to retrieve essential table information, create parent and sub tables, establish primary and foreign key relationships, manage data movement, and conduct data profiling.

New System Settings 

The latest release introduces a set of new configurations that provide users with even greater control over the application's behavior. The newly added configurations are listed in the following table.

System Setting Keys

Description

load.data.from.cache

This setting specifically pertains to the Azure Data Lake connector to control the visibility of file data and file columns when profiling is performed.

Parameters,

If set to "true", file data and file columns are displayed.

If set to "false", file data and file columns are excluded.

bridge.temppath

This configuration is used to specify the file system path where temporary or intermediate files related to the bridge functionality are stored.

Parameters,

Input the temp path in the field provided.

profile.all.files.enable

This configuration allows the user to enable or disable the "Catalog and Profile All Files" option in the Data Catalog > Folders, accessible through the nine dots icon.

Parameters,

If set to "true", the option is displayed in Data Catalog > Folders, under the nine dots menu.

If set to "false", the option is disabled in Data Catalog > Folders, under the nine dots menu.

bridge.server.host

This enables the configuration of the host or URL of the bridge server, allowing the system to establish connections and communication with the specified server.

Parameters,

Input the bridge server's URL into the provided field.

bridge.server.port

This configuration enables the user to specify the port number associated with the bridge server to communicate with the bridge server through the defined port.

Parameters,

Enter the bridge server's port number into the provided field.

pkfk.relation.job.rowcount.check

This configuration allows the user to specify the minimum row count that a table must meet to qualify as a primary key. When set to a value such as 100, primary key columns will be identified only for tables whose row count equals or exceeds this threshold.

Parameters,

The default value for this criterion is set at 100.

Enter the value in the field provided.

pkfk.relation.job.max.fk.count

This configuration specifies a limitation on the number of foreign keys that must be identified to calculate relationships. When set to a value such as 20, relationships will only be constructed if the count of identified foreign keys is equal to or less than 20. If, for example, 25 foreign keys are identified, relationships will not be established.

Parameters,

Enter the value in the field provided.

relationjob.min.match.count.from.topvalues

This setting defines the minimum number of matches needed between the top values of two columns to qualify for a potential relationship. For instance, if set to 10, at least 10 matches among the top values of the two columns are required to initiate a relationship. If this threshold is not met, the columns won't be considered for relationship building.

Parameters,

Enter the value in the field provided.
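Taken together, the three relationship-job settings above act as gates before a PK-FK relationship is built. A hedged sketch of that combined check follows; the function and its wiring are illustrative, not OvalEdge internals — only the thresholds and their directions come from the descriptions above:

```python
# Illustrative combination of the three thresholds described above:
#   pkfk.relation.job.rowcount.check            (default 100)
#   pkfk.relation.job.max.fk.count              (e.g. 20)
#   relationjob.min.match.count.from.topvalues  (e.g. 10)

def qualifies_for_relationship(pk_row_count, fk_candidate_count,
                               top_value_matches,
                               min_rowcount=100,
                               max_fk_count=20,
                               min_top_matches=10):
    """True only when every configured check passes."""
    return (pk_row_count >= min_rowcount            # table large enough to be a PK
            and fk_candidate_count <= max_fk_count  # not too many FK candidates
            and top_value_matches >= min_top_matches)  # enough top-value overlap
```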

show.draft.terms.to.viewers

This setting enables a toggle button in the Business Glossary that switches the Tree View and List View between showing only Published terms and All terms (Published and Draft).

Parameters,

The default value is set to 'true'.

If set to "true", viewers can switch the toggle between Published and All terms (Published and Draft). In the List View, the term status filter can be used to filter draft and published terms.

If set to "false", viewers cannot see draft terms in either the Tree View or the List View.

allreports.pagination.limit

This configuration is used to specify the number of reports displayed on a single page within the Dashboard > All Reports.

Parameters,

Input your desired value into the designated field. For instance, if set to 20, only 20 reports will appear on a single page for improved navigation and readability.

tag.multiple.parent.hierarchy

This configuration setting provides users with the flexibility to create tags with the same name under different root tags.

Parameters,

If set to "true", users can create tags with the same name under different root tags.

If set to "false", tag names must be unique; identical or duplicate tags are not allowed.

servicedesk.request.bulk.approval.limit

This configuration is used to define the maximum number of tickets that can be approved in bulk using the "Approve/Reject" button located within the Service Desk's "Waiting Approval" tab.

Parameters,

Default Value, The default maximum limit is set at 50 tickets.

Maximum Value, The maximum allowable limit is restricted to 100 tickets.

Input Field, Users are expected to enter their preferred value within the designated field.

microstrategy.custom.fields

This configuration is used to activate or deactivate the custom fields 'Owner name' and 'Report created date' for the Microstrategy connector.

Parameters,

If set to "true", it enables the custom fields 'Owner name' and 'Report created date' for the Microstrategy connector.

If set to "false", the custom fields are deactivated.

entityrelationip.max.limit

This is used to configure the maximum number of nodes displayed in the entity relationship graphs within the Data Catalog. Maximum limit: 100.

Parameters,

The default setting is 50, which means that initially, up to 50 nodes are shown in these graphs.

Enter the value in the field provided.

smtp.oauth2.tenantid

Enter Tenant ID for OAuth Token Validation and SMTP Server Authentication.

Parameters,

By default, this setting is empty.

Enter a Tenant ID in the field provided to validate an OAuth token and set up SMTP Server Authentication.

smtp.oauth2.clientid

Enter Client ID for OAuth Token Validation and SMTP Server Authentication.

Parameters,

By default, this setting is empty.

Enter a Client ID in the field provided to validate an OAuth token and set up SMTP Server Authentication.

smtp.oauth2.clientsecret

Enter Client Secret for OAuth Token Validation and SMTP Server Authentication.

Parameters,

By default, this setting is empty.

Enter a Client Secret in the field provided to validate an OAuth token and set up SMTP Server Authentication.

smtp.oauth2.cloud.name

Enter Cloud Name for OAuth Token Validation and SMTP Server Authentication.

Parameters,

By default, this setting is empty.

Enter a Cloud Name in the field provided to validate an OAuth token and set up SMTP Server Authentication.

oe.store.source.data

This configuration allows the user to determine whether the application should store source data or not.

Parameters,

The default setting is set to "true," which means the application will store the source data.

If set to "false," the application will not store the source data.

Additional Information,

If you choose "false," please be aware of the following implications,

AI Recommendations, Recommendations will be solely based on object names.

Data Quality Control Center, Remediation tools will not be available.

Data Quality Failed Values, You will not be able to view failed values.

Governed Data Query, Results of the governed data query will not be visible.

Query Sheet, Real-time data will be accessible but with some latency.

It's important to note that selecting "false" will result in the removal of all stored source data from the system, and this data will be irrecoverable.

New Advanced Jobs

The latest release introduces a set of new advanced jobs that provide users with even greater control over the feature's behavior. The newly added advanced jobs are listed in the following table.

Job Name 

Description

DbtFileParseAndUpdatDescription

This job allows the user to copy and update the business description from the DBT connector (Manifest JSON file) to the matching data objects in the specified source connector. Additionally, this job creates new custom fields in the source connector based on the test custom fields provided in the DBT connector.

Attributes,

Source Connection Id

DBT Connection Id

Build Lineage from Python JSON config Files

This job allows the user to build lineage on manually added JSON files which are connections, flows, and objects files in the Config folder. This is achieved by identifying matching source and target names within flows and objects, facilitating the lineage-building process.

Attributes,

RootFilePath

Source ConnectionInfo id

Destination ConnectionInfo id

ALERT FILE CHECK VALIDATION

This advanced job is developed for a client to validate their HDFS file system. It's intended to confirm whether the specified file is present in the designated folder for the specified time frame.

Attributes:

HDFS connection ID and the file path.

Update Schedule Next Execution Date Time

The purpose of this job is to update the next execution date and time in the database > "Schedule" table > "Next Execution date" column. The column is updated only when the "Next Execution date" field is empty or null. Note that running this advanced job is mandatory the first time the database is new or updated.

Attributes not needed.
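The "update only when empty or null" behavior described above can be sketched with an in-memory SQLite table standing in for the real "Schedule" table. Table and column names here are illustrative, not the actual repository schema.

```python
# Minimal sketch: fill in the next-execution column only where it is NULL or
# empty, leaving existing values untouched. Schema names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE schedule (id INTEGER, next_execution TEXT)")
conn.executemany(
    "INSERT INTO schedule VALUES (?, ?)",
    [(1, None), (2, "2024-01-01 00:00")],
)

# Only rows where the column is NULL (or empty) are filled in, matching the
# job's documented behavior.
conn.execute(
    "UPDATE schedule SET next_execution = ? "
    "WHERE next_execution IS NULL OR next_execution = ''",
    ("2024-06-01 00:00",),
)

print(sorted(conn.execute("SELECT id, next_execution FROM schedule")))
# [(1, '2024-06-01 00:00'), (2, '2024-01-01 00:00')]
```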

PopulateCCFValuesinConfigureViewService

This advanced job enables the display of custom code fields within the Data Catalog's "Configure View" pop-up settings window.

Attributes not needed.

Delete duplicate codes

The purpose of this job is to delete duplicated drop-down options for code custom fields. The system does not permit the creation of duplicate drop-down options; executing this job removes any existing duplicates.

Attributes not needed.

Talend Java Variables

The purpose of this job is to build lineage by taking the Talend connection ID and query connection ID and replacing the Java global variables with actual values.

Attributes:

Talend connection id

Query connection id

DQ Score calculation for 61

The primary objective of this job is to ensure that users migrating from the previous version to the 6.1.1 release can access the most up-to-date Data Quality (DQ) score. This is imperative due to recent adjustments made in the score calculation process.

Attributes not needed.

Snowflake Tag Syncing Service

This job is created for a client (GreenSky) to synchronize tags between Snowflake and OvalEdge bidirectionally. If left blank, the initial synchronization will occur from Snowflake to OvalEdge, followed by the propagation of OvalEdge changes to Snowflake.

Attributes:

Connection Id (Attr 1) - Synchronization is limited to the metadata within this connection.

Global Domain (Attr 2) - A global domain will be established using the provided name (e.g., GOVERNANCE).

Category (Attr 3) - Specify the schema name (e.g., GOVERNANCE_DEV) from which the tags should be retrieved.

Sync Type (Attr 4) - Indicate whether you want to synchronize Snowflake to OvalEdge (SF2OE) or vice versa (OE2SF).
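The Sync Type attribute drives a simple direction dispatch, which can be sketched as follows. The function and step names are hypothetical, not the job's real internals; the blank-attribute branch mirrors the documented default of a Snowflake-first sync followed by propagating OvalEdge changes back.

```python
# Illustrative dispatch on the "Sync Type" attribute: SF2OE pulls tags from
# Snowflake into OvalEdge, OE2SF pushes them back, and a blank value does
# both directions, Snowflake first. Names are hypothetical.
def run_sync(sync_type: str = "") -> list:
    steps = []
    if sync_type == "SF2OE":
        steps.append("snowflake -> ovaledge")
    elif sync_type == "OE2SF":
        steps.append("ovaledge -> snowflake")
    else:
        # Blank attribute: initial sync from Snowflake, then propagate
        # OvalEdge changes back, as documented.
        steps.append("snowflake -> ovaledge")
        steps.append("ovaledge -> snowflake")
    return steps

print(run_sync(""))       # ['snowflake -> ovaledge', 'ovaledge -> snowflake']
print(run_sync("SF2OE"))  # ['snowflake -> ovaledge']
```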

Snowflake Tag Syncing Service V2

This job is created for a client (VSP) to synchronize tags between Snowflake and OvalEdge bidirectionally. If left blank, the initial synchronization will occur from Snowflake to OvalEdge, followed by the propagation of OvalEdge changes to Snowflake.

Attributes:

Connection Id (Attr 1) - Synchronization is limited to the metadata within this connection.

Global Domain (Attr 2) - A global domain will be established using the provided name (e.g., GOVERNANCE).

Category (Attr 3) - Specify the schema name (e.g., GOVERNANCE_DEV) from which the tags should be retrieved.

Sync Type (Attr 4) - Indicate whether you want to synchronize Snowflake to OvalEdge (SF2OE) or vice versa (OE2SF).

Snowflake Delete tags Service

The purpose of this job is to delete all of the tags from Snowflake.

Attributes:

Attribute 1: List of Snowflake connection IDs separated by commas

Attribute 2: List of Snowflake Schema IDs separated by commas

Update Tag Hierarchy

This advanced job is executed to generate distinct identifiers (IDs) for identical tag names. The job ensures the proper functioning of the system configuration setting (key name: tag.multiple.parent.hierarchy), which grants users the capability to create tags with identical names under different root tags.

No attributes are needed.
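The idea behind this job can be sketched briefly: if two tags may share a name as long as they live under different root tags, then the unique key is the (parent, name) pair rather than the name alone. The data structure below is illustrative, not the actual tag model.

```python
# Hedged sketch: assign each (parent, name) pair its own identifier, so
# identical tag names under different roots no longer collide. Names and
# structure are illustrative only.
tags = [
    {"parent": "PII", "name": "Email"},
    {"parent": "Marketing", "name": "Email"},  # same name, different root
]

ids = {}
for i, tag in enumerate(tags, start=1):
    key = (tag["parent"], tag["name"])
    # Keying on the pair gives each occurrence a distinct ID.
    ids.setdefault(key, i)

print(ids)
# {('PII', 'Email'): 1, ('Marketing', 'Email'): 2}
```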

PROCESSRELATIONNAMETYPE

This job helps to process relation-type names in the glossary_relation table.

Term Masking Migration Job

Starting from Release 6.1, OvalEdge has been upgraded to offer improved support for multiple masking schemes at the Business Glossary Term level. This job migrates the existing masking scheme associated with a term so that multiple masking functions are supported.

No Attributes are required for this job to run.

Note: If you do not use the masking functionality, there is no need to execute this job.


Bug Fixes

The following bugs have been fixed in this release, and the application is working as expected.

Global Search 

  • In the Global Search with Left Panel Filter, users previously encountered an issue where attempting to select tags resulted in an undefined response for the drop-down options. This problem has been rectified, and now, users can successfully view and select tags from the drop-down list. 

Tags

  • In “Tags and Associations”, after users applied a filter for connector names or DAGs (Data Asset Groups), the applied filter was previously not displayed and caused difficulty in identifying applied filters and filtered content. The issue has been resolved, and users should now be able to see the applied filter displayed above connector names or DAGs.

  • In “Tags” > “Data Stories”, the naming convention for the associated object in data stories was incorrect for Viewers, affecting which data stories Viewers could see from the “Tags” page. The issue has been resolved, and Viewers can now access data stories only when access is explicitly provided on the security page.

  • In the “Tags” section > “Tree” view, the data count was previously inaccurate and could switch to displaying data for a different tag when a specific tag was selected. The same instability affected the associated data counts in Tables, Table Columns, Files, File Columns, and Codes. The issue has been resolved, and users can now accurately view data counts for selected tags across all of these sections.

  • In "Tags > Associations > Table Column Tab", the filter previously did not work as expected: clicking it did not display the list of items, preventing users from filtering and viewing the data they needed. The issue has been resolved, and clicking the filter now displays the list of items as intended.

  • In Tags, the OvalEdge application previously permitted the use of commas in tag names when creating new tags but did not allow commas when modifying the names of existing tags. This issue has been resolved, and the OvalEdge application now uniformly disallows the usage of commas in tag names, both during tag creation and when modifying the names of existing tags.

  • In Tags, when a child tag has multiple parent tags, attempting to filter by a parent tag would previously result in incorrect search results. This issue has been resolved and users can now filter by parent tags with correct search results.

  • In Tags, when users created a new parent tag from the "Edit tag" popup and then tried to add it under child tags, the newly created parent tag was not visible. The issue has been resolved, and users can now successfully add a parent tag under child tags using the "Edit tag" popup, including newly created parent tags.

  • In Tags, users were previously unable to see DAGs on the homepage of the system. Whenever they created a DAG, it automatically got categorized under a particular tag. Essentially, all newly created DAGs were added as child objects under the "chennai123" parent tag. The issue has been resolved, and users can now see their DAGs on the homepage of the system as expected, and the automatic categorization of newly created DAGs under the "chennai123" tag has been corrected.

  • In Tags, filtering the list of tags by the "Created By" criteria worked when users searched for "First name" or "Last name" individually, but no results appeared when both were combined as "first name last name". The issue has been resolved, and users can now filter by "Created By" with any combination of search terms.

  • In Tags > Tree view, users previously experienced significant delays in loading the list of tags. This issue has been addressed, and the load time has been significantly reduced for an improved user experience.

  • In Tags, users encountered an issue where attempting to associate a default project with a table linked to a specific tag was unsuccessful. The "Add to Default Project" icon was non-functional, preventing users from including the tag-associated table in the project. This issue has been resolved. Users can now add tag-associated data objects to their default projects, enhancing project management and data organization capabilities.

Data Catalog

  • In Data Catalog, the row count, which should have indicated the number of records in a file, was not displayed previously even when data was present in the data tab. The issue has been resolved. Users can now view the accurate row count after profiling.

  • In Data Catalog, an error was encountered previously when attempting to un-catalog files and folders using the "9 dots" menu. The issue has been resolved, and users can now un-catalog files and folders without encountering the error.

  • In Data Catalog, users previously observed that certain tables displayed an "unknown column name" in their column details. The issue has been resolved, and users no longer encounter the "unknown column name" problem when inspecting column details for tables in the Data Catalog.

  • In Data Catalog > Column details, users previously reported a problem where the column titles in the Data Catalog did not match those displayed in the Tableau Server dashboard. The issue has been resolved, and users now see consistent column titles in the Data Catalog, matching those displayed in the Tableau Server dashboard.

  • In Data Catalog, creating global code custom fields at the Table Column level with specific options (e.g., 1, 2, 3) previously caused duplicate entries, both in the Code Custom Fields section and in the Data Catalog at the Table Column level. The issue has been resolved, and users no longer encounter duplicate options for code custom fields in Table Column settings.

  • In Data Catalog > Reports, users previously encountered an error when clicking on the text associated with an object. Specifically, when they navigated to the RFW (Reporting Framework) Reports, selected any report, and clicked on the object text, an error occurred. The issue has been resolved. Users can now click on the text associated with an object in the Reports section of the Data Catalog without encountering any errors.

  • In Data Catalog, the function to select a report, change its associated governance role, and modify a role (specifically, changing "steward" to "QA") was not working as expected. The issue has been resolved, and users can now successfully change governance roles for reports in the Data Catalog application, including updating the "steward" role to "QA," and these changes are being applied as expected.

  • In Data Catalog, when users previously downloaded data from the bottom of the Tables summary page, the technical description information was not visible in the downloaded CSV file. The issue has been resolved, and the technical description information is now included in the downloaded CSV file.

  • In Data Catalog, when users previously modified a label within governance roles, configured a view, and then used that view, downloading data with the "Simple Download" option did not include the labels. The issue has been resolved, and downloads using the "Simple Download" option now include the expected labels.

  • In Reporting Framework, a specific challenge previously arose with custom-created reports and their associated bar charts: when there was a limited number of associated data objects, the bar charts within these reports were not visible. The issue has been resolved, and users can now see the bar charts in custom-created reports, even with fewer associated data objects.

  • In Data Catalog, the "Open in new tab" link, which was supposed to allow users to open a code-related item in a new browser tab, was not functioning as expected. The issue has been resolved, and the "Open in new tab" link in the Data Catalog now functions as expected, allowing users to open code-related items in a new browser tab without any problems.

  • In Data Catalog, the "Add to Impact Analysis" option in the 9 dots menu was not clickable after adding the Chrome plugin: when users attempted to add a table to an existing impact analysis, the option was unresponsive. The issue has been resolved, and "Add to Impact Analysis" is now responsive and functional, even with the Chrome plugin installed.

  • In Data Catalog, duplicate API calls on the codes page resulted in two invocations of the same API when accessing the page. The issue has been resolved, meaning that the duplicate API call on the codes page has been addressed, and users no longer experience two invocations of the same API when accessing the page.

  • In Data Catalog > List view, the 4th governance role occasionally did not appear in the "Update Governance Roles" pop-up. The issue has been resolved, and users can now consistently see the 4th governance role in the "Update Governance Roles" pop-up.

  • In Data Catalog's Custom View, when users created a custom view, changed the display row count, and then accessed the view's configuration again, it defaulted to displaying the saved raw value of the last created user view, which was added during creation time. This occurred for all views, regardless of the specific row count users set for each view. The issue has been resolved, and users can now create custom views in the Data Catalog, change the display row count, and when they access the view's configuration again, it correctly displays the row count.

  • In Data Catalog's Custom View, previously, changing the display row count for a created view didn't persist when configuring the view again. Instead, it used to default to displaying the saved raw value of the last created user view, which was set during its initial creation, rather than maintaining the last configured page size. This issue has been resolved and users can see the last configured page size in the configure view.

  • In Data Catalog > Tables > Summary, when users previously updated a text custom field, it was saved as HTML code. Downloading the data containing this field retrieved HTML code instead of the intended text value. This problem was noticed when updating text custom fields for terms in the Business Glossary. The issue has been resolved, and users can now update and download the intended text value as expected.

  • In Data Catalog > Table > Summary, users previously could not access the associated ticket using the service request count feature, even after the ticket had been approved. The issue has been resolved and now users can access the associated tickets from the tables service request count feature without any errors. 

  • In Data Catalog, objects were duplicated with identical IDs, which led to data catalog integrity problems. This issue has been resolved, with unique IDs assigned, and duplicate objects removed.

  • In Data Catalog > Codes, query pages were not loading or were experiencing significant delays, leading to timeouts and rendering the feature unusable. This issue has been resolved and users can now load the query pages in a timely manner. 

  • In Data Catalog's Viewer module, two issues were identified: after users added objects to the access cart, the objects did not appear on the access cart page; and the access cart incorrectly displayed "Add Schemas" for all connector types instead of appropriate terms like "Add Tables" or "Add Folders" based on the connector type. Both issues have been resolved. Objects added to the access cart now correctly appear on the access cart page, and the access cart displays appropriate terms like "Add Tables" for connectors such as Snowflake, Redshift, and Databricks, and "Add Folders" for S3.

  • In Data Catalog, a challenge was previously encountered where relationships could not be built for specific schemas in the attribute during the "Adv Job, Discover Primary and Foreign Key Recommendations Auto" process. Instead, relationships were being created for all schemas in the connection, leading to inaccuracies in data analysis and management tasks. The issue has been resolved. The process now correctly builds relationships only for the given schemas, ensuring accurate data analysis and management in the data catalog.

  • In Data Catalog, when data objects are added to the access cart, they were successfully added but did not previously show up on the access cart page. This issue has been resolved and the data objects are now showing up in the access cart. 

  • In Data Catalog, the system was previously unable to build relationships for specific schemas in the attribute. This was due to incorrect labels, configuration, and validation handling. This issue has been resolved and the relationships are built for specific schemas in the attribute. 

  • In Data Catalog > Databases, the advanced search operator "greater than" was not working correctly for the "Row count" column. This issue has been resolved, and users are now able to get accurate search results when they apply the "greater than" operator to the row count column.

  • In Data Catalog, assigned terms were displayed in the tags column for both reports and report columns. This issue has been resolved, and the tags column now shows assigned terms only for reports.

  • In Data Catalog > Table, a profiling status column was previously missing. This issue has been resolved, and the users can now easily see the profiling status of tables in the Data Catalog.

  • In Data Catalog, the SAAS application was displaying duplicate schemas and tables. This duplication was visible in the data catalog, where multiple instances of the same objects were being displayed. The issue has been resolved, and the data catalog now displays a single instance of each schema and table.

  • In Data Catalog, blank and empty values were not supported, which prevented users from creating objects with these values. The issue has been resolved and users can now create objects with blank and empty values.

  • In Data Catalog, a request was raised to introduce a new download option for business descriptions, allowing users to download specific business descriptions selectively rather than all at once. The load metadata download option in the Data Catalog now lets users update fields based on their requirements.

  • In Data Catalog, the "Update Roles" popup previously remained displayed even after users moved to another module without closing it, disrupting the workflow. The issue has been resolved: when users open the popup to update roles and navigate to another module, the popup automatically closes.

  • In Data Catalog > Reports, users previously faced a 504 Error while trying to download data, particularly the list of documented columns. This issue has been resolved, and now, when a user clicks on the download plugin, it initiates a job. Once the job is completed, a clickable link is generated, allowing users to download the file to their local devices as expected.

  • In Data Catalog > Configure View, Codes, there was a performance problem associated with the filter icon when dealing with governance roles, which included Custodian, Steward, and Owner. The issue has been resolved, and there are no longer any performance problems when dealing with the filter icon for governance roles.

  • In Data Catalog, users encountered difficulties accessing the Reports page in the Data Catalog. The problem has been resolved, and users can now access the Reports page without encountering any errors.

  • In Data Catalog, when users applied a certification type to the same data object multiple times in Data Catalog > Files, they encountered an error message stating "data object has been designated as certified." This has been addressed; a warning message with the same information is now displayed instead of an error.

  • In Data Catalog, the scrolling didn't function correctly on summary pages, negatively impacting the user experience. This problem has been resolved, and scrolling now works as intended on summary pages.

Business Glossary

  • In Business Glossary, the "String_Alphanumeric" scheme, designed to handle alphanumeric strings (combinations of letters and numbers), was not working as expected. The issue has been resolved, and now the "String_Alphanumeric" scheme in the Business Glossary is functioning as expected, effectively handling alphanumeric strings.

  • In Business Glossary, users were not receiving a popup message when adding tags to a term. The issue has been resolved and users now receive a popup message when adding tags to terms.

  • In Business Glossary, users previously had inconsistent permissions for publishing and drafting terms. They could perform these actions in the list view but encountered access issues when trying to do the same within individual term pages. The issue has been resolved, and users now have consistent permissions for publishing and drafting terms in both the list view and individual term pages.

  • In Business Glossary > List View, after creating a term, the fields "Approval by" and "Approval Date" were not populated correctly when users attempted to view the custom view of the term. This issue has been resolved, and users can now see the "Approval by" and "Approval Date" only after publishing the term.

  • In Business Glossary > List view > Terms, users were encountering an issue where duplicate terms were being added when attempting to establish related objects. The issue has been resolved and users can no longer add duplicate terms when establishing related objects.

  • In Business Glossary, while working with security table columns, a new Boolean data type was required to facilitate content masking. The issue has been resolved, and a Boolean data type is now supported, along with a new masking scheme designed specifically for Boolean data types.

  • In Business Glossary > Summary page, clicking one checkbox in the "Manage Data Association" section erroneously checked all other checkboxes, and a refresh was required to display the correct options. The issue has been resolved, and clicking one checkbox no longer checks the other checkboxes.

  • In Business Glossary, "See More" was displayed for related objects even when there were no more objects to show. Additionally, the behavior of relating terms to objects was inconsistent, sometimes showing unexpected relationships when editing. These issues have been resolved, and in the Business Glossary, "See More" will no longer be displayed when there are no more related objects to show. 

  • In Business Glossary, when users clicked on any term and related a report column to it, clicking on "report columns" resulted in an error. The issue has been resolved, and users can now click on related report columns as expected.

  • In Business Glossary, users encountered several issues with term relationships. The count of related terms was inaccurate, even when there were more than 30 related terms associated with a particular term. They were unable to access additional related terms beyond the initial set. The process of removing terms did not function as expected. The issue has been resolved, and users can now see the accurate counts of related terms, even when there are more than 30 related terms associated with a particular term. Users can also access additional related terms beyond the initial set, and the process of removing terms is working properly. 

  • In Business Glossary > List View, when users added objects in the 'Associated Data' section for a term and then deleted objects from the second page, it displayed an empty page instead of returning to the first page. The issue has been resolved, and users can now see the associated data redirecting to the first page.

  • In Business Glossary, the Load meta from files previously did not allow managing Governance Roles in bulk. This issue has been resolved and users can now manage Governance Roles in bulk by specifically selecting checkboxes for Governance Roles under Terms. 

  • In Business Glossary, some existing tags were not visible to users when attempting to apply them to terms, resulting in incorrect tagging. The issue has been resolved, ensuring that terms are now properly tagged and the tagging system functions as intended.

  • In Business Glossary, Admin users were initially unable to download Business Glossary templates. The issue has been resolved, and Admin users can now easily access and download the templates from the Business Glossary.

  • In Business Glossary, users were unable to update the Governance role for a term using the modify action when loading metadata from files using the data template. This issue has been resolved and users can now update the Governance role using the data template.

  • In Business Glossary, upon creating a term in the Configure View, all values were visible except for the Critical Configuration Field (CCF) values. This occurred because the CCF values were not being displayed in the Configure View. The issue has been resolved and the CCF values are now correctly displayed in the Configure View, ensuring comprehensive visibility of all relevant term information.

  • In Business Glossary, previously, when a term with a masking policy was associated with data objects, the data remained visible despite the masking policy. This issue has been resolved, and term masking is now applied to the data object as expected.

  • In Business Glossary > Term Summary section, users encountered an issue when attempting to add a tag to a term. An error message, "Something went wrong," appeared, preventing users from successfully assigning tags to the selected terms. This issue has been resolved. Users can now effortlessly assign tags to selected terms without encountering any issues.

Projects

  • In Projects > List View, when users added 20 objects to the project, it worked as expected. However, when users attempted to add an additional 20 objects to the same project, it was not possible to add them. The issue has been resolved, and users can now add objects as required to the projects.

  • In "Projects", after selecting an object and viewing its details, the tab did not close automatically, requiring users to refresh the page manually to close it. The issue has been resolved: the tab now closes automatically as expected, improving data management and navigation efficiency.

  • In Projects, the project count was not updating accurately after object deletions. This issue has been resolved and the project count is shown accurately even after the object deletion.

Dashboards

  • In Dashboards, the count of "All Reports" displayed did not match the count in the Data Catalog under Reports. The issue has been resolved, and the count of "All Reports" in Dashboards now matches the count in the Data Catalog under "Reports".

  • In Dashboards, users attempting to open "Dashboard > All Reports" previously encountered a "Problem Occurred" error message. This issue has been resolved, and users can now seamlessly view all the reports without any disruptions.

Service Desk 

  • In Service Desk, all new term creation requests were previously automatically directed to domain owners/stewards, causing an overload of requests. The issue has been resolved, and users can now specify the "Category" and, if applicable, "Subcategory" of the term in the request form. This ensures that new term creation requests are correctly routed to the appropriate stewards/owners, reducing the overload of requests sent to domain owners/stewards.

  • In Service Desk, the custom service request templates and their approval process were the focus of improvement. Particularly, when a custom service request template had an expiration date set, it was not prevented from being approved once it had crossed that expiration time. Custom service requests with expired templates could still be approved. The issue has been resolved and now, when a custom service request template has an expiration date set, it is correctly prevented from being approved once it has crossed that expiration time, ensuring proper functionality and compliance with the intended process.

  • In Service Desk, notifications for service requests were not consistently displaying the correct label for the requestor previously. Instead of consistently showing the requestor's identity, these notifications sometimes displayed various users' names or even the admin's name. The problem also affected system alerts. The issue has been resolved. Users now consistently see the correct label for the requestor in notifications for service requests in the Service Desk. The problem affecting system alerts has also been addressed.

  • In Service Desk > SLA, even after an SLA had expired, the system previously continued to send notifications to the SLA user about the associated ticket. In addition, the "Approve" and "Reject" options remained active after a ticket had expired, when they should have been disabled. The issue has been resolved: users no longer receive notifications for expired SLAs, and the "Approve" and "Reject" options are now correctly disabled after a ticket's expiration.

  • In Service Desk system, users encountered a situation where, after clicking on a service desk ticket, they were presented with a blank page instead of the expected ticket details or information. The issue has been resolved, and users no longer encounter a blank page when clicking on a service desk ticket. Users can now access the ticket details and information as expected.

  • In Service Desk, there was an issue with the system failing to send email notifications when a service request was rejected. The issue has been resolved, and users can now receive email notifications when a service request is rejected in the Service Desk.

  • In Service Desk, SLA (Service Level Agreement) reminders previously misfired when a service request (SR) had two approvers and the first approver rejected the request: members of the team assigned as the second approver continuously received alerts to approve the request, even though it had already been rejected. The issue has been resolved, and the second approver no longer receives SLA reminders after the first approver rejects a request.

  • In Service Desk, users previously received excessive email notifications, both before and after ticket expiration, despite SLA settings. Approved service requests also caused email overload. The issue has been resolved, and users no longer receive multiple email notifications simultaneously. SLA notifications are sent according to the configured 1-hour interval, and approved service requests do not generate excessive email notifications, ensuring a more streamlined email experience.
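Interval-based SLA notifications like the configured 1-hour cadence described above are typically enforced by recording when the last reminder was sent and suppressing anything sooner. A minimal sketch under that assumption (function and variable names are illustrative, not OvalEdge's implementation):

```python
from datetime import datetime, timedelta
from typing import Optional

REMINDER_INTERVAL = timedelta(hours=1)  # assumed configured SLA interval

def should_notify(last_sent: Optional[datetime], now: datetime) -> bool:
    """Send an SLA reminder only if none was sent within the interval."""
    return last_sent is None or now - last_sent >= REMINDER_INTERVAL

now = datetime(2024, 1, 1, 10, 0)
print(should_notify(None, now))                         # True: first reminder
print(should_notify(now - timedelta(minutes=30), now))  # False: too soon
print(should_notify(now - timedelta(hours=2), now))     # True: interval elapsed
```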

  • In Service Desk, when users previously created a service ticket with a low priority, the corresponding status in Jira consistently displayed as medium, irrespective of the ticket's actual priority level. This inconsistency in status reporting between the two systems affected the accuracy of priority tracking and caused confusion. The issue has been resolved, and users no longer encounter the problem of the Jira status erroneously displaying as medium. The status reporting between the two systems now accurately reflects the priority level, improving tracking and clarity.

  • In Service Desk, there was previously an issue where changes made by an approver to a ticket were not saved if the ticket was rejected. This caused data loss as the changes would be lost once the page was refreshed. The issue has been resolved now. Changes made by an approver in a ticket are now saved even if the ticket is rejected. After editing the ticket and rejecting it, the changes will be retained when the approver returns to the ticket.

  • In Service Desk, tables generated through service requests were automatically deleted when the corresponding schema was crawled. This issue has been resolved, and tables created via the Service Desk now remain intact after the crawling process.

  • In Service Desk, when raising a "Suggest a Term" service request, there was previously an issue where the 'Category' field in the user interface did not update when assigning the category-level steward to the service request. This issue has been resolved and the specified category is displayed for the term suggested.

Data Quality

  • In the Data Quality > DQR (Data Quality Rules) module, users previously could not view the associations in the Associations tab, even after executing the code, uploading the file, and having a successful job run. The issue has been resolved, and users can now see the related associations in the Associations tab in DQR.

  • In the Data Quality > DQR, DQ Execution Details were missing for old rules, while they were available for new rules. The issue has been resolved, and users can now see the DQ Execution Details for both old and new rules, ensuring consistent access to this information.

  • In the Data Quality Scores Dashboard, the search functionality was not working as expected when users tried to search for specific columns. The issue has been resolved, and the search functionality within Data Quality Scores now works as expected, allowing users to effectively search for and locate specific columns within the dataset. 

  • In the Data Catalog, there were concerns about missing business descriptions for tables in the new production environment. These descriptions were previously present but had disappeared. The issue has been resolved. The missing business descriptions for the tables in the new production environment have been restored, and the descriptions are now visible.

  • In Data Quality (DQ) Rules, the user previously faced an issue with the import file, as the schedule was incorrect. The user's intention was to schedule the DQ Rule to run on a specific day and time. After importing the file, the schedule only displayed the time without specifying the day of the week. Consequently, the rule did not execute as intended on the desired day. The issue has been resolved, and now users can import the file and accurately specify the day of the week for scheduling the DQ Rule.

  • When creating and running a Data Quality rule in the OE Application, adding contacts previously resulted in three alerts being generated for both successful and failed rule executions. This issue has been resolved, and the alerting system has been streamlined so that duplicate alerts are no longer generated.

  • In Data Quality > Data Quality Rules, users encountered an issue where adding Data Quality Contacts for email notifications triggered multiple alerts for each rule, regardless of its outcome (success or failure). This led to redundant alerts that were overwhelming and confusing for users. The issue has been resolved, and now users receive only email notifications indicating the rule's status (Passed or Failed).

  • In Data Quality > DQR, there was an issue where users couldn't view the scheduled action. The issue has been resolved, and users can now view and access the scheduled action in the DQR (Data Quality Rule) module. They can click on the schedule toggle button to see the schedule available in the schedule module with the respective DQR name.

My Resources

  • In My Resources > My Profile, users previously encountered an error while attempting to update their phone numbers. The issue has been resolved, and now users can save their phone numbers without encountering any errors.

  • In My Resources > My Watchlist, email notifications for metadata changes were not functioning as expected. Although inbox notifications within the application were working, email notifications were not being sent when metadata changes occurred, causing inconvenience to users who rely on email alerts for updates. The issue has been resolved, and users can now receive email notifications for metadata changes.

  • In My Resources > My Watchlist, the count for table columns was missing, and there was an alignment issue with tables. The issue has been resolved, and the object count for table columns displays correctly when hovering over them, and the alignment with tables is working as expected. 

  • In My Resources > Application's text boxes, there was an issue where the "Decrease Indent" and "Increase Indent" functionalities were switched, resulting in the icons being paired with the wrong words. The issue has been resolved. Now, in the application's text boxes, the "Decrease Indent" and "Increase Indent" functionalities are correctly aligned, and the icons are paired with the appropriate words.

File Manager

  • In File Manager, when attempting to open the File Manager, users previously encountered an internal server error. The issue has been resolved, and users can now open the File Manager without encountering an internal server error. 

  • In File Manager, when users accessed and clicked on the ADL (Azure Data Lake) Connector, and then proceeded to click on 'View Data' or 'Download Files' from the 9 dots menu option, they previously encountered an error. The issue has been resolved and users can now view the data and download files for the ADL Connector as required.

  • In File Manager, when users attempted to open the File Manager previously, they encountered 500 errors. This problem has been successfully resolved, and users can now access and view the File Manager page without any issues.

Query Sheet

  • In Query Sheet, the ‘join’ operation was not functioning as expected. Although there was common data present in both of the tables involved in the join, the results were not being displayed as they should have been. The issue has been resolved; users can now perform join operations in the Query Sheet, and the results are displayed correctly even when there is common data present in both of the tables involved in the join. 
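The expected behavior is that an inner join returns every row pair sharing a key value present in both tables. A small self-contained check using SQLite (the tables here are illustrative, not the Query Sheet's actual backend):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER);
    CREATE TABLE customers (id INTEGER, name TEXT);
    INSERT INTO orders VALUES (1, 10), (2, 20);
    INSERT INTO customers VALUES (10, 'Ada'), (20, 'Bob'), (30, 'Cy');
""")

# Rows whose key exists in both tables must appear in the join result.
rows = conn.execute("""
    SELECT o.id, c.name
    FROM orders o
    JOIN customers c ON o.customer_id = c.id
    ORDER BY o.id
""").fetchall()
print(rows)  # [(1, 'Ada'), (2, 'Bob')]
```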

  • In Query Sheet, when users selected a connector and then chose a schema, the selected connector's name should have been displayed on hover, but 'Select Connector' continued to be displayed instead. The issue has been resolved, and users can now see the selected connector's name when they hover over it.

  • In Query Sheet, while querying a table in Azure Synapse, users were previously encountering an error. The error resulted in the query operation failing and not yielding the desired results. The issue has been resolved and users can now see the desired results.

  • In Query Sheet, users faced an issue where, even after selecting the connector, schema, and table, they were unable to execute queries. Instead, they received an error message stating, "Getting error in query execution, Please check the query or job logs." This problem has now been rectified, and users can successfully execute their queries without encountering any errors.

  • In Query Sheet, for the selected connector and schema, the "Auto SQL" mode was previously switching unexpectedly to "Advanced" mode, preventing users from selecting tables. The issue has been resolved. Users can now stay in "Auto SQL" mode and select tables without any issues.

Jobs

  • In Jobs, there was an issue where, during the crawling of a connector, the job logs had null errors and displayed the job status as "Partial Success". This issue has been addressed and now the job logs are working as intended and displaying the appropriate status.

  • In Jobs, there was an issue where the job log displayed zero datasets, even when the connector contained data objects. This issue has been resolved, and now the job log correctly displays all datasets as expected.

Advanced Tools

  • In Advanced Tools > Impact Analysis, the previous process involved running an Impact Analysis (IA) on a large set of source objects, which resulted in a substantial number of impacted objects (over 10,000). When users attempted to view these impacted objects, a blank page was displayed, and they encountered a 504 error. Additionally, when Impact Analysis was run again on the same set of objects the next day, a 502 error occurred, leading to a system crash. The issue has been resolved, and users can now successfully view the impacted objects without encountering the 504 error, and running Impact Analysis on the same set of objects no longer leads to a 502 error or system crash.

  • In Advanced Tools > Impact Analysis, duplicate queries were identified previously after running an impact analysis for a Looker report column. The responsible code for these queries was identical, and the only variation was in the date field within the queries. The issue has been resolved and when running an impact analysis for a Looker report column, no duplicate queries are generated. 

  • In Advanced Tools > Impact Analysis, users previously encountered an error with HTTP status 500 - Internal Server Error when using the Impact Analysis feature. This issue prevented them from performing the desired Impact Analysis and understanding the potential impact of changes in the system. The issue has been resolved, and the users no longer encounter the HTTP status 500 - Internal Server Error when using the Impact Analysis feature in Advanced Tools.

  • In Advanced Tools > Build Auto Lineage, the Redshift system was failing to build lineage for stored procedures, resulting in a "parse failed" error. This error indicated syntax or structural issues within the stored procedures, which caused inaccurate data flow representation. This issue has been resolved, helping users better understand the flow of their data and improving data management and analysis capabilities.

  • In Advanced Tools > Build Auto Lineage, Power BI was experiencing an issue with incomplete lineage for reports in the reports group: it failed to capture all data sources and dependencies on one of the model reports. The issue has been resolved, and users can now rely on accurate data tracing and have a better understanding of data dependencies in the reports.

  • In Advanced Tools > Build Auto Lineage, users faced challenges while attempting to build the Redshift lineage. Certain SQL conditions caused complications, resulting in an inability to trace the origin and flow of data within Redshift. The issue has been successfully resolved with the implementation of fixes from the OvalEdge Lab. Users can now effectively trace the origin and flow of data within the Redshift data platform. 

  • In Advanced Tools > Build Auto Lineage, users faced an issue while attempting to build lineage for Power BI reports. They encountered a "string out of index" error message, indicating a problem with string manipulation or indexing within the lineage-building process. The issue has been resolved. The necessary adjustments have been made to the string manipulation and indexing processes. Now, users can successfully build lineage for Power BI reports without encountering the "string out of index" error.

  • In Advanced Tools > Build Auto Lineage > Teradata lineage, there was an issue with the "Replace" functionality during query validation without spaces in the query correction process. This problem was impacting the crawl of Teradata and the process of building lineage for all the source codes. The issue has been resolved. The "Replace" functionality now works as expected during query validation and correction, even for queries without spaces. The crawl of Teradata and the lineage-building process now function smoothly without any problems, and the status filter accurately reflects the validation results.

  • In Advanced Tools > Build Auto Lineage > Azure Synapse lineage, some source code procedures previously could not be parsed correctly, resulting in the failure to build lineage for those codes. The issue has now been resolved: all source code procedures parse correctly, enabling users to build lineage successfully for all codes.

  • In Advanced Tools > Build Auto Lineage, some reports faced challenges building lineage due to variations in the format of the data source caption and ID, leading to incomplete lineage information. To address this, logic has been added to handle the variations in the data source caption and ID formats, enabling accurate lineage building for all reports, including those previously affected.

  • In Advanced Tools > Build Auto Lineage > Azure Synapse, previously, building lineage for source codes sometimes encountered challenges in parsing the query.  This impacted the ability to build lineage for these specific code segments in Azure Synapse. The issue has been resolved, and users can now successfully build lineage for these specific code segments without any issues.

  • In Advanced Tools > Build Auto Lineage, previously, the lineage could not be built for specific source codes because procedures could not be parsed or analyzed successfully during the process. This issue has been resolved and the lineage can be built successfully. 

  • In Advanced Tools > Build Auto Lineage, users were previously unable to access the complete source code, and data source information was missing in the source code lineage. The issue has been resolved, and users can now access the complete source code.

  • In Advanced Tools > Build Auto Lineage, Power BI tables were not included in the lineage construction process, even though validation, crawling, and user-provided source connection names were implemented. This issue has been resolved, and users can now successfully create lineage and retrieve tables from the specified source connection names during crawling without encountering any errors.

  • In Advanced Tools > Load Metadata from Files, users previously encountered an error message in the job logs after attempting to create a role with specific permissions. In the user interface (UI), the role was seemingly created with default permissions. The issue has been resolved, and users can now create roles with specific permissions in the LMDF within the Advanced Tools without encountering error messages in the job logs. 

  • In Advanced Tools > Load Metadata from Files, users were previously unable to upload Data Quality Rules (DQRs) that included Data Templates. Users were unable to upload values like business descriptions, tags, and custom field data (text, code, number, date) along with the DQR in the job logs. The issue has been resolved, and users can now upload Data Quality Rules (DQRs) with Data Templates without any errors in the job logs.

  • In Advanced Tools > Load Metadata From Files, users previously faced an issue where they couldn't upload Term Names and Custom field values from the "Load Metadata from Files" feature, even after creating a domain with 32 characters and adding terms to it. The upload process would fail when using the downloaded Business Glossary template, causing difficulties in data management. The issue has been resolved, and users can now successfully upload Term Names and Custom field values from the "Load Metadata from Files" feature in Advanced Tools.

  • In Advanced Tools > Load Metadata From Files > Schema Template, users previously encountered a situation where custom fields were not updating through the template when associated with data. Users expected to be able to add and modify custom fields through the Schema Template and have them reflected in the UI, but this was not happening. This issue has been resolved; users can now successfully add and modify custom fields through the Schema Template, and these changes are reflected in the UI when associated with data.

  • In Advanced Tools > Load Metadata from Files, previously users could not map lineage information to table columns when using the uploaded lineage template, causing functionality issues. The issue has been resolved, and users can now map lineage information to table columns by uploading the lineage template.

  • In Advanced Tools > Load Metadata from Files, previously when users attempted to download the Users template with a data file, they encountered a 'Problem Occurred' error. The issue has been resolved and users can now download the Users template with a data file. 

  • In Advanced Tools > Load Metadata from Files, previously, when users clicked twice on a selected object's radio button, it deactivated the 'Next' button. The issue has been resolved, and the 'Next' button now remains enabled even when an object's radio button is clicked twice.

  • In Advanced Tools > Load Metadata from Files, users previously encountered an issue while attempting to update table terms using the downloaded template data for tables. This problem caused difficulties in managing table-related terms effectively. The issue has been resolved, and users can now update table terms seamlessly using the downloaded template data for tables in the Load Metadata from Files.

  • In Advanced Tools > Load Metadata from Files > Data Quality Rules (DQR), there was a previous issue where users attempting to download the template for DQR with data found that the custom fields were not arranged in series. The issue has been resolved. Users can now download the template for DQR with data from the "Load Metadata from Files" feature in Advanced Tools, and the custom fields are arranged in series.

  • In Advanced Tools > Load Metadata from Files, when using the "Load Metadata" functionality to update the Business Glossary template with existing data, neither custom fields nor business descriptions could previously be modified. This issue has been resolved; custom fields and business and detailed descriptions can now be modified using the "Load Metadata from Files" functionality.

  • In Advanced Tools > Load Metadata from Files, the DQR with Data Template feature was previously disabled by default. This prevented users from loading metadata from files. The issue has been resolved by re-enabling the feature.

  • In Advanced Tools > Load Metadata from Files, users couldn't previously enable a data download button under the DQR tab due to comma restrictions in tag creation and editing.  The issue has been resolved and users can now enable the data download button under the DQR tab while still maintaining the flexibility to create and modify tags with commas.

  • In Advanced Tools > Load Metadata from Files, hyperlinks added to the business description with data were not previously displayed as clickable links when the template was reuploaded. Instead, they appeared as plain text. This issue has been resolved, and the hyperlinks are now clickable.
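Rendering bare URLs in a description as clickable links is commonly done by wrapping them in anchor tags before display. A minimal regex-based sketch of that idea (an assumption about the approach, not OvalEdge's actual rendering code):

```python
import re

# Matches http(s) URLs up to the next whitespace or tag opener.
URL_RE = re.compile(r"(https?://[^\s<]+)")

def linkify(text: str) -> str:
    """Wrap bare URLs in anchor tags so they render as clickable links."""
    return URL_RE.sub(r'<a href="\1">\1</a>', text)

desc = "See https://docs.example.com/guide for details."
print(linkify(desc))
# See <a href="https://docs.example.com/guide">https://docs.example.com/guide</a> for details.
```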

  • In Advanced Tools > Load Metadata from Files > Table Columns, users were previously required to provide inputs in the 'Action' column when uploading a template. When users uploaded a template without filling in the required 'Action' column, job logs did not issue a warning message. The issue has been resolved, and users now receive a warning message when they fail to update the 'Action' column in the uploaded template.

  • In Advanced Tools > Load Metadata from Files, changes have been made in the arrangement of columns in the "Load Metadata from Files" section. Specifically, the Code Custom field columns have been reordered to appear after the Text Custom fields. Additionally, an issue with custom fields created within the application, where the 'Non-editable' checkbox was selected but values entered in the template were still updating in the UI, has been resolved. Now, values entered in the template do not update in the UI when the 'Non-editable' checkbox is enabled (as expected). 

  • In Advanced Tools > Load Metadata from Files, users previously encountered an error when downloading a table column template with data. The issue is now resolved and users are able to download the Table column template.

  • In Advanced Tools > Load Metadata From Files, users previously encountered a problem when loading metadata which had governance role names containing special characters like single quotation marks or others. When these names were updated in the template, they were not displaying the full names correctly, causing problems. Now, the system fully supports special characters in governance role names, ensuring an accurate and complete display of role names in templates.

  • In Advanced Tools > Load Metadata from Files, users previously encountered an issue where the table's business description was getting deleted when uploading table column definitions using the "Load metadata from file". This issue has been resolved, and now it's working as intended without deleting the table’s business description.

  • In Advanced Tools > Load Metadata From Files, when uploading the Table Columns (TC) template, including terms and masks for table columns, there was an issue where the changes made were not reflected in the UI. This issue has been addressed, and the terms and applied masking for table columns are now properly reflected in the Table Columns as expected.

  • In Advanced Tools > Load Metadata from Files > Business Glossary, users previously encountered an issue where modifications made to the "Template with Data," including Custom Fields and Business or Detail Descriptions, were not reflected in the application when the template was uploaded. This issue has been resolved, and modifications made to the "Template with Data" are now accurately reflected in the application upon uploading.

  • In Advanced Tools > Deep Analysis, users previously experienced a problem where the system retrieved older tables despite recent profiling. Users found that when they ran the Deep Analysis job to gain insights into data tables, it failed to reflect the most recent changes and updates in the tables. The issue has been resolved, and now the "Deep Analysis" job effectively fetches the most up-to-date data from the tables during profiling, ensuring accurate analysis results.

  • In Advanced Tools > OvalEdge APIs, inconsistent field naming conventions previously caused confusion during term retrieval. This issue has been resolved by standardizing field naming conventions across all APIs.

  • In Advanced Tools > OvalEdge APIs > Data Quality Rule API, the "Last run time" was displayed in nanoseconds, leading to a mismatch with the user interface (UI). This issue has been resolved by modifying it to a valid time format.
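Converting a nanosecond epoch value into a human-readable timestamp is a one-line division plus formatting step; a sketch of the general technique (the function name is illustrative, and the exact output format OvalEdge chose is not specified here):

```python
from datetime import datetime, timezone

def ns_to_iso(last_run_ns: int) -> str:
    """Convert a nanosecond epoch timestamp to an ISO-8601 string in UTC."""
    return datetime.fromtimestamp(last_run_ns / 1_000_000_000, tz=timezone.utc).isoformat()

# 1.7e18 ns after the epoch is 1.7e9 seconds, i.e. 2023-11-14 22:13:20 UTC:
print(ns_to_iso(1_700_000_000_000_000_000))  # 2023-11-14T22:13:20+00:00
```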

  • In Advanced Tools > OvalEdge APIs > Business Glossary API, DQ Rule associations previously appeared in the API response even though the latest version was intended to exclude them. This issue has been resolved by removing the Data Quality Rules from the API response, ensuring the Business Glossary associations are correctly displayed.

  • API Fixes - Several refinements have been implemented in the OvalEdge APIs, encompassing changes in field names, field removals, and updates to descriptions.

Administration

  • In Administration > Connectors, the Teradata database does not provide a built-in way to get the data types for the columns in a table, which impacted the performance and accuracy of data analysis queries. The issue has been resolved, and users can now retrieve data types for the columns in a table in the Teradata database.
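For context, Teradata exposes column metadata through the `DBC.ColumnsV` dictionary view, where `ColumnType` is a short code (e.g. `CV` for VARCHAR) rather than a readable type name, so crawlers typically translate the codes themselves. A partial, illustrative mapping, not OvalEdge's actual implementation:

```python
# Partial map of Teradata DBC.ColumnsV ColumnType codes to readable names.
TERADATA_TYPE_CODES = {
    "CF": "CHAR",
    "CV": "VARCHAR",
    "D":  "DECIMAL",
    "DA": "DATE",
    "F":  "FLOAT",
    "I":  "INTEGER",
    "I1": "BYTEINT",
    "I2": "SMALLINT",
    "I8": "BIGINT",
    "TS": "TIMESTAMP",
}

def resolve_type(code: str) -> str:
    """Translate a ColumnType code into a readable data type name."""
    return TERADATA_TYPE_CODES.get(code.strip(), "UNKNOWN")

print(resolve_type("CV"))   # VARCHAR
print(resolve_type("I "))   # INTEGER (codes come back blank-padded)
```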

  • In Administration > Connectors, the Dell Boomi connector interface had a problem where the "settings" option was inaccessible when clicking on the 9 dots symbol. The issue is now resolved, and the "settings" option is accessible in the Dell Boomi connector. 

  • In Administration > Connectors > RDAM, the previous attempt to update the password for the Redshift RDAM in OE (OvalEdge) for the Connector user failed. As a result, users were unable to access the Redshift database. The issue has been resolved, and users can now successfully access the Redshift database.

  • In Administration > Connectors > RDAM, the masking policy was not reflected in the source after enabling it for columns. Data remained unmasked, impacting data privacy and security. This issue has been resolved, and users can now ensure that the masking policy is properly reflected in the source, which helps to protect data privacy and security.

  • In Administration > Connectors > RDAM, previously actions were not logged in audit trails when revoking permissions remotely, causing a gap in auditing. The issue is resolved, and audit trails now capture permission revocations accurately.

  • In Administration > Connectors > RDAM, users previously encountered difficulties mapping a user to a group when OvalEdge was configured as the Master. This prevented users from being assigned to groups, affecting user permissions and access control. The issue with Redshift RDAM has been resolved, and now users can be successfully mapped to their respective groups even when OvalEdge is configured as the Master.

  • In Administration > Connectors > RDAM, users previously encountered an issue where they were unable to revoke a user assigned to a schema when the remote system (OvalEdge) was configured as the master. This prevented users from being removed from schemas, potentially impacting user permissions and data security. The issue has been successfully resolved. Users can now revoke users from a schema even when the remote system (OvalEdge) is configured as the master, ensuring proper access control and security measures.

  • In Administration > Connectors > Profiling, users previously encountered missing column summary information (distinct counts, null counts, top values) after performing a clone operation. The discrepancies raised uncertainty about whether it was a bug or related to crawling, profiling, or sampling processes. The issue has been resolved: after a clone operation, the column summary now includes accurate distinct counts, null counts, and top values, providing users with the information needed for data analysis.

  • In the Administration > Connectors > Redshift connector, users encountered an issue with RDAM that prevented them from viewing the remote privileges of a user in the User List of the Governance Catalog Module, particularly in the "Remote Access" section. This limitation restricted users from effectively managing and monitoring user access to the database. The issue has been successfully resolved, and users can now view the remote privileges of a user in the User List of the Governance Catalog Module, specifically in the "Remote Access" section. 

  • In Administration > Connectors, users faced an issue where they couldn't delete a Group from the connector's Roles Tab in Redshift RDAM. Despite clicking the delete option, an icon pop-up appeared, but the deletion didn't happen, and the icon remained visible. This problem has been addressed, and now users can easily delete a Group from the connector's Roles Tab without any complications. 

  • In Administration > Connectors, users encountered a problem where they couldn't view reports in the Build Lineage page while trying to build lineage for the Tableau Connector. The issue has been resolved, users can now view the required reports and successfully build data lineage, enabling them to effectively track and understand the data flow.

  • In Administration > Connectors, a problem arose from the team's decision not to use big data jars due to security concerns, which caused issues with data access and viewing. The problem has been resolved by implementing DuckDB, and users can now access and view the data without encountering any errors.

  • In Administration > Connectors, users encountered an issue where they could validate the Delta Lake connector with incorrect credentials successfully, even though the validation process should have resulted in failure when incorrect information was provided. This raised security concerns and highlighted the need to improve the validation process to prevent unauthorized access. The issue has been resolved. The validation process now correctly detects and handles incorrect information, ensuring that unauthorized access is prevented, and users can securely validate the Delta Lake connector with accurate credentials.

  • In Administration > Connectors, users previously encountered an issue where lineage information was not populating for certain segment tables in the attached source codes, and the mquery (Power Query) was not visible in the reference tab for these tables. This limitation hindered data flow tracing and understanding of the data source and transformations within OvalEdge. The issue has been resolved: lineage information now populates correctly for all segment tables, and the mquery (Power Query) is visible in the reference tab.

  • In Administration > ADM Connectors, the Power BI (PBI) report previously encountered an issue where duplicate queries were being generated, leading to incorrect data representation. This issue had adverse effects on data accuracy and report performance. The issue has been resolved. As a result, the Power BI (PBI) report will no longer generate duplicate queries, and the data displayed in the report is now accurate.

  • In Administration > ADM Connectors, users previously faced an issue where they were unable to view remarks or comments when updates were made to a user's profile or permissions. This lack of visibility hindered their ability to access additional context or details about the modifications made. The issue has now been resolved. Users can now see the remarks or comments associated with any updates made to a user's profile or permissions, providing them with a more comprehensive understanding of the changes made.

  • In Administration > Connectors > RDAM, specifically in the User Management section, there was an issue where users could add conflicting privileges "nocreatedb" and "Createdb" simultaneously. This posed a potential security risk as it allowed a user to have the ability to both create databases and prevent other users from creating databases. The problem has been resolved. Now, when users attempt to add conflicting privileges, the system displays an appropriate error message, preventing the addition of conflicting privileges and ensuring proper access control and data security.
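
    The mutual-exclusion check described above can be sketched in a few lines. This is a minimal illustration of the idea, not OvalEdge's actual implementation; only the "createdb"/"nocreatedb" pair comes from this release note, and any other privilege names are hypothetical.

    ```python
    # Minimal sketch of rejecting mutually exclusive privileges.
    # Only the "createdb"/"nocreatedb" pair comes from the release note;
    # the "login"/"nologin" pair is a hypothetical extra example.
    MUTUALLY_EXCLUSIVE = [("createdb", "nocreatedb"), ("login", "nologin")]

    def validate_privileges(requested):
        """Return an error message if the set contains a conflicting pair, else None."""
        normalized = {p.lower() for p in requested}
        for a, b in MUTUALLY_EXCLUSIVE:
            if a in normalized and b in normalized:
                return f"Conflicting privileges: '{a}' and '{b}' cannot be granted together"
        return None
    ```

    With a check like this in place, a request containing both "Createdb" and "nocreatedb" is rejected with an error message instead of being saved.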

  • In Administration > Connectors > DB2 ODBC connector, users faced an issue related to sample profiling. When attempting to perform sample profiling, the system failed to generate the expected profile results, hindering users from obtaining the necessary data insights and statistics. The issue has been resolved, and users can now perform sample profiling successfully, obtaining the expected profile results without any complications.

  • In Administration > Connectors > Redshift connector, users faced an issue with RDAM that prevented them from viewing the remote privileges of a user in the User List of the Governance Catalog Module, particularly in the "Remote Access" section. This limitation posed challenges in effectively managing and monitoring user access to the database. The issue has been successfully resolved. Users can now view the remote privileges of a user in the User List of the Governance Catalog Module, specifically in the "Remote Access" section, ensuring smoother access control and data security.

  • In Administration > Connectors > RDAM, there was an issue where permissions granted by a user to others were not automatically revoked when the user was deleted. This resulted in other users retaining access to the data even though the granting user was no longer present. The issue has been resolved. Now, users can successfully revoke permissions for schemas and tables even after deleting the associated user. 

  • In Administration > Connectors, previously, the SFTP connector's File Manager encountered problems when used with the Bridge and cataloging. Users faced issues with null profile files, data unavailability in the data tab, and errors while downloading files from the 9 dots menu option. The issue has been resolved; the SFTP connector's File Manager now functions properly with the Bridge and cataloging. Users can access the profile file without any null values, view data in the data tab, and download files from the 9 dots menu option without encountering any errors.

  • In Administration > Connectors, previously, when users attempted to search for the third page using the toolbar, they encountered a problem where the search results did not display the correct message or information. The issue has been resolved, and users should no longer encounter the problem of incorrect messages or information when searching for the third page in the toolbar within the Connectors section.

  • In Administration > Connectors > Snowflake, users previously encountered errors when attempting to profile the schema of the database. These errors were reported in the context of job status, which was marked as "partial success." The issue has been resolved, and users no longer encounter errors when profiling the schema of the database in the Connectors > Snowflake section. The job status now indicates a successful outcome without "partial success" markings.

  • In Administration > Connectors > Talend, the connector was previously unable to retrieve files from the Azure Repository. When attempting to crawl the Talend connector, users encountered a 404 error, which was confirmed in the Tomcat log, and the connector could not fetch files from the Azure Repository. The issue has been resolved, and users can now fetch files successfully from the Azure Repository.

  • In Administration > Connectors > SQL Connector, profiling was not working as expected, even when the sample profile size was set to 90000. Users were unable to obtain results based on this profile size, and there was a question regarding the editability of the sample profile, which should have allowed users to input their own values. The issue has been resolved, and profiling in the SQL Connector now functions as expected when the sample profile size is set to 90000. Users should be able to obtain results based on this profile size, and the sample profile should accept user-provided values as intended.

  • In Administration > Connectors > Azure Data Factory (ADF), there was a challenge with transferring "Pipeline_Run_params" objects to OvalEdge in the production environment. This has been addressed, and users can now transfer these objects without any problems.

  • In Administration > Connectors > ADM Connectors, previously, when users logged into the OvalEdge Application and navigated to the Connectors section to crawl the Power BI connector, report columns were not generated as expected for some reports. The issue has been resolved, and report columns are now generated correctly after crawling the Power BI connector.

  • In Administration > Connectors > ADM Connectors, while connecting to the database using Azure Key Vault during migration, users previously encountered a challenge in reading the JDBC URL. The issue has been resolved by making the JDBC URL a non-mandatory field.

  • In Administration > Connectors > ADM Connectors, slow job execution was observed when crawling data from a SAP HANA connection. The issue has been resolved, and users no longer experience delays in their data extraction or synchronization tasks, resulting in improved performance.

  • In Administration > Connectors, users were unable to create data lineage, which is the visualization of data flow and relationships, due to differences between the caption (a descriptive label) and the ID (a unique identifier) of the data source within a Workbook. The issue has been resolved; users can now create data lineage successfully even when there are differences between the caption and the ID of the data source within a Workbook.

  • In Administration > Connectors > ADM Connectors > MongoDB, previously, certain schemas were failing to be crawled due to extended crawling times required to retrieve the complete data. The issue has been resolved, and users can now crawl MongoDB schemas without encountering extended crawling times or failures.

  • In Administration > Connectors > ADM Connectors, within the Workday connector, some columns were reported as missing or were not appearing as expected. The issue has since been resolved, and the missing columns in the Workday connector within ADM Connectors have been restored or addressed, ensuring that all columns appear as expected.

  • In Administration > Connectors, users were previously unable to access and manage bridge configurations in the Connectors module's "Manage Bridge" feature for "ovaledge.connector.creator." They encountered a "No Data Exists" message and an "Access Denied" error, even for the "OE_ADMIN" user. Adding the "author" role in System Settings did not resolve the problem, highlighting a role-based access issue. Only "OE_Admin" could access data when their role was in the configuration. The issue has since been resolved, and users can now access and manage bridge configurations in the Connectors module's "Manage Bridge" feature for "ovaledge.connector.creator" without encountering the "No Data Exists" message or "Access Denied" error.

  • In Administration > Connectors, previously, when users tried to profile tables, an issue occurred. The problem stems from the profiling process exceeding the preset maximum value, resulting in an error. The issue has been resolved and users can now profile tables as required.

  • In Administration > Connectors, users encountered an issue when attempting to validate a connector with a bridge, resulting in an error message, signifying that the validation had failed. When attempting validation without using the bridge, it was successful. The issue has been resolved and users can now validate the connector with or without using the bridge option.

  • In Administration > Connectors > Build lineage, when users attempted to export source code to a file in build auto lineage, the file was not generated at the temporary location as expected, and clicking the download link resulted in a blank page. The issue has been resolved, and users can now download the source code from the temp location as well as via the local path.

  • In Administration > Connectors > XML Connector, there was a metadata import issue (VSP) where imported XML columns were incorrectly set to 'string' with a length of '0'. This resulted in incomplete table column metadata. The issue has been resolved, and table column metadata is now complete and accurately reflects the data types and column lengths.

  • In Administration > Connectors > Snowflake connectors, the crawling process previously encountered a problem with schemas. The UI job logs displayed a warning message, and the crawling process omitted Stages (data objects like views and procedures), resulting in incomplete data retrieval. This caused data gaps and inaccuracies in the crawled data. The issue has been resolved. The crawling process now works as expected, and the UI job logs no longer display warning messages. Stages (Data objects like views and procedures) are properly included during the crawling process. 

  • In Administration > Connectors > Redshift RDAM (Redshift Data Access Manager) with OvalEdge as the master connector, previously, revoking a Role/Group assigned to a connector user did not update the source. The issue is now resolved, and revoking a Role/Group correctly applies the change at the source.

  • In Administration > Connectors > Crawl & Profile, users had to manually configure different temporary paths each time they downloaded templates, data, or conducted ADL profiling. This was a time-consuming and error-prone process. This issue has been resolved by providing two different paths (e.g. TEMP and PATH). 

  • In Administration > Connectors, previously, when a user was created and assigned a role or group in the connector user and roles modules, revoking the role or group did not update the source. The role or group continued to be displayed for the user in the source, even after it had been revoked in the application. This issue has been resolved, and the role or group is now updated in the source.

  • In Administration > Connectors, users were unable to update the password for the connector user in the OvalEdge application. This was preventing the user from logging into the remote Redshift using the updated password. This issue has been resolved and users are able to update the password for the connector user in OvalEdge. 

  • In Administration > Connectors, users were unable to revoke a user assigned on a schema. This issue has been resolved and users are now able to revoke users assigned on schemas.

  • In Administration > Connectors, when OvalEdge was set as the master in the connection settings, roles, groups, and users were not created in the remote system. This issue has been resolved, and roles, groups, and users are now created in the remote system when OvalEdge is set as the master in connection settings.

  • In Administration > Connectors > Crawl & Profile, previously, profiling results for SQL Server connections were empty, including null count, distinct count, and other statistics. The issue has been resolved, and profiling results for SQL Server connections are now displayed.

  • In Administration > Connectors > Crawl & Profile, previously there was an issue where users couldn't see data under the Data tab when profiling a file in Azure Data Lake protected by Azure Key Vault. The issue has been resolved, and users can now effectively profile files in Azure Data Lake secured with Azure Key Vault, with the data becoming visible under the Data tab as expected.

  • In Administration > Connectors, users previously encountered an issue wherein they were unable to delete a Group from the Connectors Roles Tab. The issue has been resolved, and users can now seamlessly delete Groups from the Connectors Roles Tab as needed.

  • In Administration > Connectors, users were previously unable to crawl the DBT connector due to the absence of a "Manifest.json" file in certain "RUN" files, and the crawling of those files remained unsuccessful. This issue has been resolved and users can now validate and crawl the DBT connector successfully. 

  • In Administration > Connectors > Delta Lake, users encountered difficulties when profiling tables due to a syntax error, which resulted in a partial success status. This issue has been resolved by correcting the syntax, and users can now profile tables as expected.

  • In Administration > Connectors, the OvalEdge tool previously did not support connector views that contained the "UNPIVOT" operation. This issue has been resolved, and OvalEdge now supports the "UNPIVOT" operation in all queries and for all connectors.
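
    For context, UNPIVOT rotates columns into rows. Its effect can be illustrated with a small Python sketch; the data below is hypothetical and not tied to any specific connector:

    ```python
    # Illustration of what an UNPIVOT does: each listed value column of every
    # input row becomes its own (attribute, value) output row.
    def unpivot(rows, id_col, value_cols):
        out = []
        for row in rows:
            for col in value_cols:
                out.append({id_col: row[id_col], "attribute": col, "value": row[col]})
        return out

    # One wide row becomes two narrow rows, one per quarter column.
    result = unpivot([{"product": "A", "q1": 10, "q2": 20}], "product", ["q1", "q2"])
    ```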

  • In Administration > Connectors > Redshift > Crawl & Profile, while crawling views and materialized views, an unwanted code, specifically an extra "Create view or Replace View View_name As" snippet, was being added to materialized views. This problem has been resolved, and users can now crawl and profile views without encountering any errors or unwanted code.
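
    A fix for this kind of problem can be sketched as a clean-up step that strips a stray "CREATE [OR REPLACE] VIEW ... AS" prefix from a fetched definition. This is an assumed illustration of the behavior, not the actual OvalEdge code:

    ```python
    import re

    # Strip a stray "CREATE [OR REPLACE] VIEW <name> AS" prefix (case-insensitive)
    # that should not appear in a materialized view's stored definition.
    PREFIX = re.compile(r"^\s*create\s+(?:or\s+replace\s+)?view\s+\S+\s+as\s+", re.IGNORECASE)

    def clean_view_sql(sql):
        return PREFIX.sub("", sql, count=1)
    ```

    A definition that does not start with such a prefix passes through unchanged.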

  • In Administration > Connectors > Denodo Connector, users were previously unable to build lineage due to the absence of JSON files, resulting in parse failure errors. This issue has been resolved and users can now build lineage successfully.

  • In Administration > Connectors > Crawl & Profile page, selecting multiple schemas at once resulted in an Internal Server Error. This issue has been resolved, and users can now select multiple schemas simultaneously for crawling and profiling without encountering this problem.

  • In Administration > Connectors, for the S3 connector, an issue was identified where, after applying crawler rules using regex and recrawling the connector, the job log indicated a successful job, but no data was visible in the file manager. This issue has been resolved, and the folders are now correctly displayed on the file manager page as expected.
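
    Crawler rules of this kind are typically include-style regex filters: only paths matching at least one pattern are kept for cataloging. A minimal sketch of the idea follows; the paths and patterns are hypothetical:

    ```python
    import re

    # Keep only the paths that match at least one include pattern.
    def filter_paths(paths, include_patterns):
        compiled = [re.compile(p) for p in include_patterns]
        return [p for p in paths if any(rx.search(p) for rx in compiled)]

    # Everything under "raw/" is kept; scratch files are skipped.
    kept = filter_paths(
        ["raw/2023/orders.csv", "tmp/scratch.log", "raw/2023/users.csv"],
        [r"^raw/"],
    )
    ```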

  • In Administration > Connectors, when adding a new Tableau connector and selecting "Yes" for token-based authentication, the labels were incorrectly shown as "Username" and "Password" instead of the expected "Token Name" and "Token". This issue has been resolved, and the expected label names are now displayed correctly.

  • In Administration > Connectors > DB2 connector, an issue occurred when users enabled the bridge. Users encountered a problem where the "Select important schemas for crawling and profiling" pop-up window appeared empty, preventing the selection of schemas before crawling and profiling. This issue has been effectively resolved. Users can now access and select the preferred schemas for the DB2 connector as intended.

  • In Administration > Connectors, users encountered an access denied issue for all the connectors. This issue has been successfully resolved, and users can now access the desired connectors as intended.

  • In Administration > Connectors > Power BI On-Premise environment, a few issues have been addressed:

    • Certified Report Pop-up Not Displaying: Previously, when users logged in via the Chrome Plug-in and attempted to open certified reports in Power BI using the nine dots menu, the Certified Pop-up did not appear as expected. This issue has now been resolved, and the Certified Pop-up appears as expected.

    • Error Message "Charts Not Found": Another issue occurred when published terms were applied to Power BI report columns and users tried to open these reports from the nine dots menu. In such cases, an error message stating "Charts Not Found" was displayed. This issue has been fixed.

  • In Administration > Connectors > S3 connector, previously, there was an issue where data crawled and cataloged from Parquet and YAML files was not displayed in the Data Catalog's Data tab. This issue has been resolved. Now, data contained within the Parquet and YAML files is clearly visible and accessible in the Data Catalog's Data tab.

  • In Administration > Connectors > AWS Athena connector, an issue was identified where, after data was crawled and cataloged, users encountered query execution errors when profiling on tables. As a result, profiling operations could not be successfully executed, and the required profile statistics were not displayed. This issue has been resolved. Users can now profile tables without encountering query execution errors, ensuring accurate and informative profile statistics are displayed as expected. 

  • In Administration > Connectors > Redshift connector, an issue occurred after successfully establishing the connection and building lineage. Some tables from the base to staging were omitted, resulting in incomplete lineage records. This issue has been resolved. Now, the process successfully captures all tables, ensuring a comprehensive lineage representation of your data. 

  • In Administration > Users & Roles > Connector Policies, previously, when users attempted to filter by the source column, the expected "Remote" text checkbox was not displayed. Instead, the text checkbox for "Masking" was incorrectly shown. The issue has been resolved and now when users filter by the source column, the correct "Remote" text checkbox is displayed. 

  • In Administration > Users & Roles > Connector Roles, users encountered an issue where the Warehouse filter option didn't work correctly for sorting results. The issue has been resolved and users can now sort the Warehouse filter option results without any errors.

  • In Administration > Users & Roles > Connectors, users were unable to map a user to a group. This issue has been resolved and users are now able to map users to groups.

  • In Administration > Users and Roles > Connector Users, previously, the application allowed combinations of user privileges that contradicted their intended exclusivity. This led to scenarios where users could simultaneously possess 'nocreatedb' and 'createdb' privileges, despite the expectation that one should override the other. This issue has been resolved, and applying a privilege now properly overrides its conflicting counterpart.

  • In Administration > Users & Roles, previously, a problem occurred where if a role was linked with Domain Creator or Project Admin (in "System Settings > Users and Roles") and then deleted through the user interface (UI), the role still remained in the field. This issue has been fixed by adding a validation message when attempting to delete a role from the UI. This message guides users to first remove the role from the system settings under the specified key before proceeding with deletion. This improvement simplifies role management and prevents accidental deletions.

  • In Administration > Security, when users disable a governance role, its corresponding checkbox in the Data Catalog or Business Glossary should also be disabled. Previously, these checkboxes remained active, even though they didn’t work in those modules. The issue has been resolved, and when a governance role is disabled in the Administration's Security section, the corresponding checkboxes in the Data Catalog or Business Glossary are now correctly disabled.

  • In Administration > Security > Databases, previously when users selected the custodian name as the custodian filter option and applied the filter, it didn't display the expected results associated with the custodian name. The issue has been resolved and now users can get the custodian-filtered results accurately.

  • In Administration > Security > Folders, previously when searching for the Admin Roles & Users filter with user options, all roles and users were displayed, including the user option, rather than showing only the specific users' details. The issue has been resolved, and users can now filter with particular users or roles, and only the associated results are displayed.

  • In Administration > Security, the "Delete folder" option in the security section was not functional previously. When users visited the security folder page and attempted to use the "Delete folder" option, it didn't work as expected. The issue has been resolved, and users can now visit the security folder page and delete folders as desired.

  • In Administration > Security > Databases, previously, when using the "Governance Role 5" filter to select the "#Inbox Team," the expected results were not displayed. Selecting "Governance Role 4" and the "OE_Admin" checkbox showed the results for the "#Inbox Team," but directly selecting "#Inbox Team" via the "Governance Role 5" filter did not yield the expected results. The issue has been resolved, and users can now directly select the "#Inbox Team" using the "Governance Role 5" filter, with the expected results displayed as intended.

  • In Administration > Security > Story Zone, previously, when users applied filters to view authorized roles and users in the "Story Zone," the system incorrectly displayed roles and users that should not have been available for selection. The issue has been resolved; when users filter for authorized roles and users in the "Story Zone," the system now correctly displays only the available roles and users as expected. 

  • In Administration > Security > Databases, when users selected both the Active and Inactive checkboxes in the status filter, the system displayed only the Inactive results, whereas both Active and Inactive results should have been shown. This issue has been resolved, and now when users select both the Active and Inactive checkboxes in the status filter, both Active and Inactive results are displayed as expected, ensuring accurate visibility of database statuses.

  • In Administration > Security, there was an issue with role-based access for the "service desk" keyword, which caused an unresponsive drop-down menu when attempting to change the license type selection from "author" to other options. This problem was isolated to the "service desk" keyword and did not affect other keywords. The issue has been resolved, and users can now change the license type selection for the "service desk" keyword without any drop-down issues; role-based access for this keyword now provides the appropriate license type selection based on user permissions.

  • In Administration > Security > Applications, there was an issue with the mouse hover behavior for the enable/disable button. When users hovered over this button to enable or disable access for viewers or contributors, it incorrectly displayed the text as "disabled" regardless of the actual status. The issue has been resolved. When users hover over the enable/disable button, it now correctly displays the appropriate status, providing a clearer user interface.

  • In Administration > Security > Databases, when users set the crawled filter option to "Yes," they were able to see the corresponding "Yes" results, and when they chose "No," they could view the associated "No" results. The problem occurred when users selected both "Yes" and "No" options together; in this case, they could only see "Yes" results. The issue has been resolved, and when users select both "Yes" and "No" options together in the crawled filter, they correctly see both "Yes" and "No" results as expected.

  • In Administration > Security > Domains, when trying to change role permissions from Read Write (RW) to Administrator (ADM), the operation functioned correctly, but when attempting to change from Administrator (ADM) to Read Write (RW), it did not work as expected. The issue has been resolved, and changing role permissions from Administrator (ADM) to Read Write (RW) now works as expected.

  • In Administration > Security > Domains, users were unable to save updates to Governance roles via the '9 dots' menu. The 'Save' button was not working as expected. This issue has been resolved and users can now save the Governance roles as expected.

  • In Administration > Security section, users previously encountered a limitation where they could not add or remove users from schemas and tables within the RDAM environment. The issue has been resolved and users now have the capability to efficiently add or remove users from both schemas and tables.

  • In Administration > Advanced Jobs, when working with AdvanceJob attributes, users discovered that copying and pasting a value cleared the entire attribute value and replaced it with only the copied part. The issue has been resolved and users can copy and paste the selected text in the attribute value.

  • In Administration > Advanced Jobs, previously, users faced job log errors when attempting to import Data Quality Rules using a CSV file. The upload process for the CSV file was unsuccessful, and the job log errors hindered the proper import of the Data Quality Rules. The issue has been resolved, and users can now upload CSV files to import Data Quality Rules in the Advanced Jobs section of the Administration module without encountering any errors.

  • In Administration > Advanced Jobs, while attempting to upload a CSV file through the Advanced Jobs tool in the OvalEdge Application UI, users previously did not receive the Stats Query and Failed Query functionalities. This issue has been resolved, and users can now view all the queries in the associated objects in Data Quality.

  • In Administration > Advanced Jobs, users encountered a problem where the Data Quality Report (DQR) score was not visible on the quality index page after migrating to a new release version. To address this issue, an advanced job, "DQ Score calculation for 61," was designed to update the DQR score, ensuring it reflects the most current and accurate data quality information. As a result, users can now access and view the DQR score on the quality index page without any disruptions.

  • In Administration > Advanced Jobs, previously, there was a problem when users executed the "Discover Primary and Foreign Key Recommendations" advanced job (providing the required attributes, such as connector ID or name and schema ID or name, and setting the option to build relationships across schemas to true). The system was unable to establish relationships between new objects and existing objects. This issue has been resolved. Now, when users run this advanced job and select the option to build relationships across schemas, the system accurately builds relationships between the selected schemas.
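
    A common heuristic behind such recommendations is name matching across schemas: a column is proposed as a foreign key when its name matches another table's primary key column. The sketch below illustrates that idea only; it is not OvalEdge's actual algorithm, and the metadata shape is hypothetical:

    ```python
    # Hypothetical heuristic: recommend (source table, column, target table)
    # whenever a column name equals another table's primary key column name.
    def recommend_fks(tables):
        """tables maps (schema, table) -> {"pk": pk_column, "cols": [column names]}."""
        recs = []
        for src, meta in tables.items():
            for col in meta["cols"]:
                for tgt, tmeta in tables.items():
                    if tgt != src and col == tmeta["pk"]:
                        recs.append((src, col, tgt))
        return recs
    ```

    For example, a customer_id column in one schema's orders table would be linked to a customers table in another schema whose primary key is customer_id.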

  • In Administration > Schedule, previously, when attempting to select and save a schedule state (such as "active" or "inactive"), the system returned an error with the message "problem occurred." The issue has been resolved, and users can now successfully select and save a schedule state (e.g., "active" or "inactive") without encountering the "problem occurred" error.

  • In Administration > System Settings, when users attempted to filter the “Last Updated By” column by a specific name, the filtering process didn’t effectively narrow down the results. Instead, it continued to display all available columns. This issue was observed across multiple system paths, including Users and Roles, Lineage, Notifications (Settings), SSO, Proxy, Connector, and AI. The issue has been resolved, and users can now successfully filter the “Last Updated By” column by a specific name in all the system paths.

  • In Administration > System Settings > Notifications & Alerts, previously, Slack notifications were coming in HTML format instead of the expected enriched Markdown format. This occurred when setting up Slack Integration in the OvalEdge application and executing Data Quality Rules. The issue has been resolved and users can now receive Slack notifications in the Markdown format when setting up Slack Integration in the OvalEdge application and executing Data Quality Rules.
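
    Converting notification bodies from HTML to Slack's mrkdwn amounts to mapping a small set of tags (Slack uses *bold*, _italic_, and literal newlines). The sketch below covers an assumed minimal tag set and is not the full conversion OvalEdge performs:

    ```python
    import re

    # Map a few simple HTML tags to Slack mrkdwn: <b> -> *...*, <i> -> _..._,
    # <br> -> newline. Real notification bodies may need a fuller converter.
    def html_to_mrkdwn(html):
        text = re.sub(r"</?b>", "*", html)
        text = re.sub(r"</?i>", "_", text)
        text = re.sub(r"<br\s*/?>", "\n", text)
        return text
    ```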

  • In Administration > System Settings > Others, users encountered an issue while trying to download the source code using the "export source code" option. Initially, the code was saved to a temporary path instead of directly downloading to the local system, leading to difficulties in accessing and managing the source code. The issue has now been resolved. When users select the "export source code" option, the code is correctly downloaded directly to their local system.

  • In Administration > System Settings > Notifications & Alerts, previously, users received notifications in their inbox for messages they were not tagged in or should not have had access to, violating expected behavior. In addition, users could clear messages but could not delete them. This issue has been resolved: users no longer receive notifications for messages they are not supposed to access, and they can now both clear and delete messages as expected, improving overall message management.

  • In Administration > Audit Trails, previously, the audit trails were not capturing actions that were revoked on the remote system. This meant there was no record of the actions that had been taken, which made it difficult to track changes and troubleshoot problems. The issue has been resolved, and the audit trails now capture all actions taken on the remote system, including actions that are revoked.

  • In Administration > Audit Trails, users were previously unable to view the comments associated with updates made to a user within the Audit Trails. The issue has been resolved. Users can now access these comments within the Audit Trails, providing a more comprehensive view of user updates and activities.

  • In Administration > Audit Trails, previously, users were unable to access the remote privileges of a user within the User List. The issue has been resolved, and users can now conveniently view the remote privileges of a user directly from the User List, enhancing their ability to manage and monitor user permissions effectively.


Copyright © 2023, OvalEdge LLC, Peach