Hi all,
If you are seeing CDC errors after configuring a Unity Catalog database for an endpoint, even though the file structure is created and full load works, the cause may be permission issues that Qlik had not yet documented, as Unity Catalog support was still fairly new at the time of writing this article.
A customer reported the following errors:
Stream component 'st_0_ADBK_lhouse_NAR' terminated
Stream component failed at subtask 0, component st_0_ADBK_lhouse_NAR
Error executing command
Failed to create net changes table for bulk apply
Execute create net changes table statement failed, statement CREATE TABLE `brznar`.`attrep_changesF3BCBBB5743E9D82` ( `seq` INT NOT NULL, `col1` VARCHAR(255), `col2` VARCHAR(255), `col3` VARCHAR(255), `col4` VARCHAR(255), `col5` VARCHAR(255), `col6` VARCHAR(255), `col7` VARCHAR(37), `col8` VARCHAR(37), `seg1` VARCHAR(255), `seg2` VARCHAR(255) ) USING csv options('multiLine'='true','FIELDDELIM'=',','nullValue'='attrep_null') LOCATION 'abfss://qlik@infohubazedevadls.dfs.core.windows.net//QLik_Staging_Directory_NAR_2/attrep_changesF3BCBBB5743E9D82'
RetCode: SQL_ERROR SqlState: 42000 NativeError: 80 Message: [Simba][Hardy] (80) Syntax or semantic analysis error thrown in server while executing query. Error message from server: org.apache.hive.service.cli.HiveSQLException: Error running query: com.databricks.sql.managedcatalog.acl.UnauthorizedAccessException: PERMISSION_DENIED: request not authorized
at org.apache.spark.sql.hive.thriftserver.HiveThriftServerErrors$.runningQueryError(HiveThriftServerErrors.scala:56)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.$anonfun$execute$1(SparkExecuteStatementOperation.scala:609)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:41)
at com.databricks.unity.HandleImpl.runWith(UCSHandle.scala:99)
at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:501)
at org.apache.spark.sql.hive.thriftserver.SparkExec
Failed (retcode -1) to execute statement: 'CREATE TABLE `brznar`.`attrep_changesF3BCBBB5743E9D82` ( `seq` INT NOT NULL, `col1` VARCHAR(255), `col2` VARCHAR(255), `col3` VARCHAR(255), `col4` VARCHAR(255), `col5` VARCHAR(255), `col6` VARCHAR(255), `col7` VARCHAR(37), `col8` VARCHAR(37), `seg1` VARCHAR(255), `seg2` VARCHAR(255) ) USING csv options('multiLine'='true','FIELDDELIM'=',','nullValue'='attrep_null') LOCATION 'abfss://qlik@infohubazedevadls.dfs.core.windows.net//QLik_Staging_Directory_NAR_2/attrep_changesF3BCBBB5743E9D82''
The root cause was insufficient Unity Catalog permissions, which Qlik had no documentation for at the time.
The fix, from Stack Overflow:
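Based on the PERMISSION_DENIED error above, the missing grants are most likely the Unity Catalog privileges on the target schema and on the external location backing the abfss:// staging path, since the net changes table is created with an explicit LOCATION clause. A hedged sketch of the required grants, using the `brznar` schema from the error; the catalog name, external location name, and principal below are placeholders to replace with your own:

```sql
-- Hedged sketch: <catalog>, my_staging_location, and the principal are
-- illustrative placeholders, not values from the original error log.

-- Allow the Replicate user to create and manage tables in the target schema.
GRANT USE CATALOG ON CATALOG <catalog> TO `replicate_user@example.com`;
GRANT USE SCHEMA, CREATE TABLE ON SCHEMA <catalog>.brznar TO `replicate_user@example.com`;

-- The attrep_changes table is created with an explicit abfss:// LOCATION,
-- so the user also needs privileges on the external location that covers
-- the Qlik staging directory.
GRANT CREATE EXTERNAL TABLE, READ FILES, WRITE FILES
  ON EXTERNAL LOCATION my_staging_location TO `replicate_user@example.com`;
```

After applying the grants, resume or reload the task so Replicate retries the CREATE TABLE statement for the net changes table.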