Release Notes for Q4 2023

Explore the new features and enhancements added in this update!

Updated in: February 2024

Release Version: 1.16

Capability to customize tool permissions for system-defined roles

You can now modify the predefined permissions for each tool within a system-defined role. Changes to tool permissions for a system-defined role are also captured in the audit logs.

Enhancements in Audit Logs

Lazsa Audit Logs now offer the following enhancements:

  • Update events display both the previous and updated values of a field.

  • User roles, in addition to usernames, are now visible for improved traceability.

  • Deletion actions now require users to provide a comment or reason, which is displayed in the audit logs for improved accountability.

  • You can now download the audit logs report in PDF format, based on the applied filters.

Logs available for data analytics jobs from Data Pipeline Studio

Logs for data analytics jobs can now be accessed from Data Pipeline Studio. You can run data analytics jobs in multiple ways: run all the algorithms in a sequence by using the Run All option, or run one algorithm at a time manually. Either way, Jenkins logs are available in Data Pipeline Studio to view the progress of job runs or to troubleshoot failed runs.

Custom libraries are now installed seamlessly on Databricks clusters from the Lazsa Platform

Support is now available for seamlessly installing and updating the libraries required to run custom code on Databricks clusters, through the Lazsa Platform.

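For context, the sketch below shows what such a library installation looks like against the public Databricks Libraries API. It is illustrative only; the host, token, cluster ID, and package name are placeholders, and it is not how the Lazsa Platform implements this internally.

```python
# Illustrative sketch: installing a PyPI library on a running Databricks
# cluster via the public Libraries API. Host, token, cluster ID, and the
# package name are placeholders; the Lazsa Platform handles this for you.
import requests

DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapi..."                   # personal access token (placeholder)
CLUSTER_ID = "0101-120000-abcd123"  # target cluster (placeholder)

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/libraries/install",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "cluster_id": CLUSTER_ID,
        "libraries": [{"pypi": {"package": "pandas==2.1.4"}}],
    },
)
resp.raise_for_status()  # non-2xx responses indicate the install failed
```
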
Data deduplication is now supported using Amazon S3

You can now create a data deduplication job by using the data quality stage with data from an Amazon S3 data lake. The job runs in two parts: first, the algorithm identifies and flags duplicate records; then, deduplication is performed based on your input about which duplicate records to retain.
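
Conceptually, the two parts look like the pandas sketch below; the data, key columns, and retention rule are hypothetical, not the Lazsa algorithm.

```python
# Conceptual two-part deduplication sketch (hypothetical data and rules;
# in the platform, the input would come from an Amazon S3 data lake).
import pandas as pd

df = pd.DataFrame({
    "customer_id": [1, 1, 2],
    "email": ["a@x.com", "a@x.com", "b@x.com"],
    "updated_at": ["2023-11-01", "2023-12-01", "2023-12-15"],
})

# Part 1: identify and flag duplicate records on chosen key columns.
key_cols = ["customer_id", "email"]
df["is_duplicate"] = df.duplicated(subset=key_cols, keep=False)

# Part 2: deduplicate based on the retention input, e.g. keep the most
# recent record for each key.
deduped = (
    df.sort_values("updated_at", ascending=False)
      .drop_duplicates(subset=key_cols, keep="first")
      .drop(columns="is_duplicate")
)
print(deduped)
```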

Audit logs available for Data Pipeline Studio

Audit logs are now available in Data Pipeline Studio for all activities, including but not limited to the following:

  • Add/Delete/Configure/Update nodes, connectors, and stages in a data pipeline.

  • Start or stop crawlers and catalogs.

  • Import or export pipelines.

  • Save and publish pipelines.

  • Add/Delete/Modify scheduled runs.

  • Promote data pipelines.

  • Add or delete pipeline versions.

  • Run/Terminate/Resume pipeline runs.

Pipeline run history enhanced with pipeline version details

Pipeline run history now includes details of the pipeline version used for each run.

Pipeline runs can be terminated automatically after a specified limit

You can now prevent a pipeline from running indefinitely or going into a hung state. The pipeline run is terminated after the duration specified in the Pipeline Run Timeout parameter.

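As a generic illustration of the idea (not Lazsa's implementation), a run-timeout guard behaves roughly like this:

```python
# Generic run-timeout guard, for illustration only; the timeout value
# stands in for the Pipeline Run Timeout parameter.
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

PIPELINE_RUN_TIMEOUT_SECONDS = 2   # stand-in timeout value

def run_pipeline():
    time.sleep(5)  # stands in for a run that takes too long

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(run_pipeline)
    try:
        future.result(timeout=PIPELINE_RUN_TIMEOUT_SECONDS)
    except TimeoutError:
        # A real orchestrator would terminate the worker at this point;
        # this sketch only detects that the limit was exceeded.
        print("Pipeline run exceeded the timeout.")
```
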
Data Pipeline Studio supports creating secret scopes in Databricks from the Lazsa Platform

You can now create Databricks secret scopes directly from the Lazsa Platform, making it convenient to manage Databricks secrets.

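For reference, creating a secret scope corresponds to the following call in the public Databricks Secrets API; the host, token, and scope name below are placeholders, and the Lazsa Platform performs this step through its own integration.

```python
# Illustrative sketch: creating a Databricks secret scope via the public
# Secrets API. Host, token, and scope name are placeholders.
import requests

DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapi..."  # personal access token (placeholder)

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/secrets/scopes/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "scope": "lazsa-pipeline-secrets",    # hypothetical scope name
        "initial_manage_principal": "users",  # let all users manage it
    },
)
resp.raise_for_status()
```
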
Special character handling in Snowflake tables

Data Pipeline Studio now handles special characters in Snowflake table and column names for data transformation and data quality jobs. This ensures that jobs run successfully even when column names and table names contain special characters.

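For background, the standard Snowflake rule for such names is to wrap the identifier in double quotes and double any embedded quotes. The helper below sketches that general rule; it is not the platform's exact mechanism, and the names are hypothetical.

```python
# Sketch of the standard Snowflake identifier-quoting rule: wrap the name
# in double quotes and double any embedded double quotes.
def quote_identifier(name: str) -> str:
    return '"' + name.replace('"', '""') + '"'

table = quote_identifier("order details!")  # hypothetical table name
column = quote_identifier("amount ($)")     # hypothetical column name
print(f"SELECT {column} FROM {table}")
# Output: SELECT "amount ($)" FROM "order details!"
```
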
Rejected records stored in a separate table

Records that are rejected during data quality checks can now be stored at a specific location or in a separate table.

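Conceptually, a quality check splits the input into passing and rejected sets, roughly as in this sketch; the quality rule and target table names are hypothetical.

```python
# Conceptual split of records into clean and rejected sets; the quality
# rule and target table names are hypothetical.
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3], "email": ["a@x.com", None, "c@x.com"]})

passed = df["email"].notna()  # hypothetical quality rule
clean, rejected = df[passed], df[~passed]

# Each set would then be persisted separately, for example:
# clean.to_sql("orders_clean", conn)        # main table
# rejected.to_sql("orders_rejected", conn)  # separate rejects table
print(rejected)
```
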
User provisioning support for Bitbucket Cloud

You can grant read, write, and admin permissions to a Bitbucket Cloud instance for custom product roles that you create. This feature lets you control permissions to a Bitbucket Cloud instance at a granular level.

Lazsa Orchestrator Agent support available for data tools deployed in Microsoft Azure

The Lazsa Orchestrator Agent support is now available for the following tools and technologies deployed in a Microsoft Azure account:

  • Databricks

  • Snowflake

  • REST API

  • RDBMS (Amazon Redshift, MS SQL, MySQL, Oracle, PostgreSQL, Snowflake)

  • Qlik Sense (with Snowflake as target)

For additional details about the tools and technologies supported by the Lazsa Orchestrator Agent, see the following link:

Tools Supported by Lazsa Orchestrator Agent.

Latest LTS versions of technologies supported

Latest LTS versions of the following technologies are now supported in the Lazsa Platform:

  • Java 17 with GraphQL Spring Boot - Gradle

  • Java 17 with GraphQL Spring Boot - Maven

  • NestJS 9

  • Next.js 13

  • Node.js 18.14 without Express

  • Nuxt 3.2.2

  • React 18.2.0

  • React 18.2.0 without TypeScript

  • React 18.2.0 with TypeScript - Yarn

  • React 18.2.0 without TypeScript - Yarn

TypeScript support available for UI technologies

TypeScript support is now added for the following UI technologies:

  • Next.js 13

  • Nuxt 3

  • Vue.js 3.2

  • React 18.2.0

  • Node.js 18.14.0

Support for security groups for AWS and Azure resources

When configuring AWS and Azure cloud account connection details in Lazsa, you can now specify the security groups for your cloud resources. Specified security groups are available for selection in the cloud instance and network settings configuration in the deployment workflow. This enables you to manage and control the network access rules for the instances associated with the configured cloud account.

Ingress controller configuration option in Kubernetes cluster connection details

You can now configure an ingress controller in your Kubernetes cluster connection details. This acts as a predefined ingress controller configuration for all the deployments within the cluster. You can customize this configuration at a stage level in the Deploy phase.

Flexibility to choose from multiple artifact management tools in Deployment Workflow stages

While creating or editing a stage in the Deployment Workflow, you now have the flexibility to select the desired artifact management tool from a dropdown list of tools configured in the platform.

Prometheus Monitoring of Kubernetes Cluster

In the Kubernetes cluster connection settings, you can now enable cluster monitoring by using Prometheus (an open-source monitoring and alerting toolkit). After you enable this option, Prometheus and Grafana (an open-source analytics and interactive visualization application) are installed on your Kubernetes cluster. In the configuration details of your deployed technology, you can access your monitoring URL. This monitoring helps you collect and analyze metrics related to containers, nodes, pods, services, and other Kubernetes infrastructure components.

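Once the monitoring URL is available, the standard Prometheus HTTP API can be queried directly; in this sketch, the URL and the PromQL query are placeholders.

```python
# Querying the standard Prometheus HTTP API; the URL and the PromQL
# query are placeholders.
import requests

PROMETHEUS_URL = "http://prometheus.example.com"  # your monitoring URL

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": "sum(rate(container_cpu_usage_seconds_total[5m])) by (pod)"},
)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    print(result["metric"].get("pod"), result["value"][1])
```
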
Approval workflow implemented for Terraform script execution

An approval workflow for Terraform script execution is now available in Lazsa. If you enable this workflow, users can execute Terraform scripts only after they are reviewed and approved by a designated approver. This adds an extra layer of control and ensures that Terraform deployments align with your organization's policies and standards.

Improvement in response time of dashboards

Dashboards now show significantly improved performance, and dashboard data loads more quickly than before.

Enhanced Role Filtering

You can now refine your role search by using filters for custom roles and system-defined roles at the platform and product levels. This enhancement gives you greater flexibility and precision in role management.

Deletion of unused configurations

In the Cloud Platforms, Tools & Technologies section, you can now delete a saved configuration for a tool if it is not currently in use. However, if the tool is being used, deletion is allowed only after you resolve the associated dependencies.

New version of Lazsa Orchestrator Agent available

The Lazsa Orchestrator Agent is updated to version 0.2.64. On the Lazsa Orchestrator Agents screen, click Update Available for instructions to upgrade the agent to the latest available version.

See Lazsa Orchestrator Agent.