v22.12
Bug Fix
- Fixed Deadlock Bug in Asynchronous Inserts
- Fix Logic in AST Level Optimization for normalize_count_variants
- Fix checksum mismatch issues preventing mutations from progressing
- Fix skip_unavailable_shards Optimization for hdfsCluster Table Function
- Fix s3 support for ? wildcard, closes #42731
- Fix arrayFirstOrNull and arrayLastOrNull for Nullable Elements
- Fix Kafka tables UserTimeMicroseconds/SystemTimeMicroseconds accounting
- Do not suppress exceptions in web disks; fix retries for web disks
- Fixed race condition between inserts and materialized view drops
- Fix Undefined Behavior in quantiles Function to Prevent Uninitialized Memory
- Additional zero uncompressed size check in CompressionCodecDelta
- Flatten Arrays from Parquet to Resolve Data Inconsistency Issues
- Fix LowCardinality Column Casting in Short Circuit Function Execution
- Fixed SAMPLE BY Queries with Prewhere Optimization for Merge Engine
- Check and compare format_version file in MergeTreeData for table loading with changed storage policy
- Fix "No column to rollback" Error in Buffer Inserts
- Fix parser bug allowing unlimited round brackets in functions with allow_function_parameters set
- MaterializeMySQL Experimental Support for DROP TABLE DDL
- session_log: Fix rare login issue due to session_log entry creation failure
- Fix "Cannot create non-empty column with type Nothing" in if/multiIf functions
- Fix bug in row level filter with default column value
- Fix queries with DISTINCT, LIMIT BY, and LIMIT returning unexpected rows (fixes #43377 and #43410)
- Fix sumMap for Nullable(Decimal)
- Fix date_diff for hour/minute on macOS
- Fix Memory Accounting Issues Due to Merges/Mutations
- Fixed primary key analysis with toString(enum) conditions
- Ensure Status Consistency in clickhouse-copier After Partition Attach
- Recovery of Lost Replica: Atomic Table Name Swap Implementation in Replicated Database
- Fix s3Cluster function error handling for NOT_FOUND_COLUMN_IN_BLOCK
- Fix logical error in JSON parsing with nested arrays having same key names
- Fixed Exception in Distributed GROUP BY with ALIAS Column
- Fix Zero-Copy Replication Bug Causing Broken Projections
- Fix Multipart Upload for Large AWS S3 Objects
- Fixed ALTER ... RESET SETTING with ON CLUSTER for all replicas
- Fix Logical Error in JOIN with Join Table Engine Using USING Clause
- Keeper fix for interserver port conflict in Raft
- Fix ORDER BY Positional Argument Handling in Subquery Column Pruning
- Fixed Exception for Subqueries with HAVING Clause Without Aggregation
- Fix race condition in S3 multipart upload causing part number error
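As context for the s3 `?` wildcard fix above, `?` is a glob that matches exactly one character in the object path. A minimal illustrative query (bucket URL and file names are placeholders, not from the release):

```sql
-- `?` matches a single character, so this reads data_1.csv, data_2.csv, ..., data_9.csv
SELECT count()
FROM s3('https://bucket.s3.amazonaws.com/data_?.csv', 'CSVWithNames');
```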
Build/Testing/Packaging Improvement
Experimental Feature
Improvement
- Implement Referential Dependencies for Table Restoration Order from Backup
- Substitute UDFs in CREATE Query and Use as DEFAULT Expressions
- Change Query Behavior to Ensure Durability and Enable Concurrent Reads
- Improve error messages while parsing JSON
- Show Read Rows in Progress Indication for STDIN from Client
- Show Progress Bar for S3 Table Function
- Progress Bar Displays Read and Written Rows
- filesystemAvailable now supports optional disk name argument; filesystemFree renamed to filesystemUnreserved
- Integration with LDAP: Default search_limit increased to 256 with configurable option
- Allow Removal of Sensitive Information from Exception Messages
- Support MySQL Compatible Queries in ClickHouse
- Keeper Improvement: Manual Node Leader Assignment with rqld Command
- Apply Connection Timeout Settings for Distributed Async INSERT
- unhex function now supports FixedString arguments
- Priority on Deleting Expired Parts According to TTL Rules
- More Precise CPU Load Indication in ClickHouse Client
- Support for Subcolumns of Nested Types from S3 Storage with Parquet, Arrow, and ORC Formats
- Add table_uuid column to system.parts table
- Added Option to Display Locally Processed Rows in Non-Interactive Mode
- Implement Aggregation-in-Order Optimization for Query Plans
- Allow Collection of Profile Events in system.trace_log for Performance Analysis
- Add input_format_max_binary_string_size setting for RowBinary format
- Fix HTTP Error Code Display in Exception Messages
- Correct Error Reporting in Multi-JOIN Queries
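Two of the function-level improvements above can be sketched with short queries (illustrative only; the `'default'` disk name is an assumption about a typical configuration):

```sql
-- unhex now also accepts FixedString arguments
SELECT unhex(toFixedString('4D7953514C', 10));  -- 'MySQL'

-- filesystemAvailable with the new optional disk name argument
SELECT filesystemAvailable('default');
```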
New Feature
- Add BSONEachRow Input/Output Format
- Add grace_hash JOIN algorithm support
- Allow Configuring Password Complexity Rules for User Management
- Mask Sensitive Information in Logs and Query Outputs
- Add GROUP BY ALL Syntax Support
- Add FROM table SELECT column syntax
- Added concatWithSeparator and concat_ws functions for Spark SQL compatibility
- Added multiplyDecimal and divideDecimal functions for fixed precision decimal operations
- Added system.moves table for currently moving parts
- Add Embedded Prometheus Endpoint Support for ClickHouse Keeper
- Support Numeric Literals with Underscore Separators
- Added support for array as second parameter in cutURLParameter function
- Add Index Expression Column to system.data_skipping_indices Table
- Add engine_full Column to Databases System Table
- New xxh3 Hash Function Added and Performance Improvements for xxHash32 and xxHash64 on ARM
- Added Constraints for Merge Tree Settings in ClickHouse
- Add Setting to Parse Nested JSON Objects as Strings
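A few of the new SQL features above can be sketched as follows (illustrative queries; the `visits` table and its columns are placeholders, not from the release):

```sql
-- GROUP BY ALL: group by every non-aggregate expression in the SELECT list
SELECT region, city, count() FROM visits GROUP BY ALL;

-- concatWithSeparator / concat_ws for Spark SQL compatibility
SELECT concatWithSeparator('-', 'a', 'b', 'c');  -- 'a-b-c'

-- Numeric literals with underscore separators
SELECT 1_000_000;

-- Fixed-precision decimal multiplication without float conversion
SELECT multiplyDecimal(toDecimal64(2.5, 2), toDecimal64(4.2, 2));
```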
Performance Improvement
- Add Settings for MergeTree Performance Optimization
- Fix settings issue with adaptive granularity in concurrent reads from remote filesystems
- Optimized List Requests to ZooKeeper or ClickHouse Keeper for Part Merging
- Skip optimization when the max_size_to_preallocate_for_aggregation value is too small
- Speed up server shutdown by skipping unnecessary old data cleanup
- Merging on Initiator Now Uses Memory Bound Approach for Aggregation Results
- Keeper Parallel Log Syncing Improvement
- Keeper Requests Batching Enhanced with New Configuration Setting