v25.8
Backward Incompatible Change
- Infer Array(Dynamic) instead of unnamed Tuple
- Move S3 latency metrics to histograms
- Require backticks around identifiers with dots
- Avoid maintenance without analyzer, which
- Write values of Enum type as BYTE_ARRAY
- Enable MergeTree setting write_marks_for_substreams_in_compact_parts by default
- Previous concurrent_threads_scheduler default value was round_robin
- ClickHouse supports PostgreSQL-style heredoc syntax: $tag$ string
- Update to properly validate AZURE permissions
- Enable allow_dynamic_metadata_for_data_lakes setting
- Disable quoting 64-bit
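
Two of the syntax-related changes above can be illustrated briefly: the heredoc entry refers to PostgreSQL-style dollar-quoted strings, and the backtick entry means identifiers containing dots must now be quoted. The table and tag names below are illustrative only, not taken from the release:

```sql
-- PostgreSQL-style heredoc: everything between $tag$ ... $tag$ is a raw
-- string, so single quotes inside need no escaping.
SELECT $doc$It's a "raw" string$doc$ AS s;

-- Identifiers with dots now require backticks (hypothetical table):
CREATE TABLE example (`attr.name` String) ENGINE = Memory;
```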
Bug Fix (user-visible misbehavior in an official stable release)
- Fix the metadata resolution
- Fix markReplicasActive
- Fix rollback of Dynamic column on parsing
- If function trim called
- Fix logical error with duplicate subqueries
- Fix incorrect result of queries with WHERE
- Historically, gcs function did not require any
- Skip unavailable nodes during
- Fix write with append
- Mask Avro schema registry authentication details
- Fix the issue where, if a MergeTree
- Fix parquet writer outputting
- Fix sort of NaN values
- When restoring from backup, the definer user
- Fix crash
- Allow referencing any table in view
- onProgress call in JSONEachRowWithProgress is synchronized
- This closes
- Fix colorSRGBToOKLCH/colorOKLCHToSRGB
- Fix writing JSON paths with NULL values
- Overflow large values (>2106-02-07) when casting
- Always apply filesystem_prefetches_limit
- Fix rare bug
- Fix the logical error Expected single dictionary
- Fix crash with clickhouse client
- Fix wrong results
- Handle exceptions properly in periodic parts refresh
- Fix filter merging
- Fix rare clickhouse crash
- Fix deadlock caused by background cancellation checker
- Fix infinite recursive analysis
- Fix a bug that was causing
- Fix incorrect construction of empty tuples
- Fix LOGICAL_ERROR
- Previously, set indexes didn't consider Nullable columns
- Now ClickHouse reads tables
- Do not try to substitute table functions
- Fix logger usage
- Fix a logical error
- The DoubleDelta codec can now only be
- Comparison against NaN value was not using
- Fix reading Variant column with lazy materialization
- Make ZOUTOFMEMORY a hardware error, otherwise it will
- Fix server crash
- Fix out-of-order writes to Keeper changelog
- Remove from table MergeTree will do nothing
- Parallel distributed INSERT SELECT
- Fix pruning files by virtual column
- Fix leaks
- Fix ALTER MODIFY ORDER BY not validating
- Change pre-25.5 value of allow_experimental_delta_kernel_rs
- Stop taking schema from manifest files but
- Fix issue where Keeper setting rotate_log_storage_interval =
- Fix logical error from S3Queue "Table is
- Lock 'mutex' when getting zookeeper
- Fix CORRUPTED_DATA error
- Fix column pruning with delta-kernel
- Refresh credentials in delta-kernel in storage DeltaLake
- Fix starting superfluous
- Fix issue where querying a delayed remote
- Ngram and no_op tokenizers no longer crash
- Fix lightweight updates
- Correctly store all settings
- Fix total watches count returned by Keeper
- Fix lightweight updates
- Fix lightweight updates
- Fix column name generation
- Fix memory tracking drift from background schedule
- Fix potential
- Implement missing APIs
- Add a check if a correlated subquery
- Now Iceberg doesn't try
- Fix double-free
- Improve error message on attempt to create
- Fix cleanup of patch parts
- Fix ILLEGAL_TYPE_OF_ARGUMENT in MV
- Fix segfault
- Fix recovering replicated databases
- Fix Not-ready Set
- Get rid of unnecessary getStatus() calls during
- Fix race in DeltaLake engine delta-kernel implementation
- Fix reading partitioned data with disabled delta-kernel
- Add missing table name length checks
- Fix the creation of RMV on
- Fix iceberg writes
- Writing lower and upper bounds are not
- Fix logical error while reading
- Fix backup of parts with broken projections
- Forbid using _part_offset column in projection
- Fix crash and data corruption during ALTER
- Queries with parallel replicas
- Fix possible UB
- Fix incorrect metrics KafkaAssignedPartitions and KafkaConsumersWithAssignment
- Fix processed bytes stat being underestimated
- Fix early return condition
- Fix the metadata resolution
- Fix rare crash
- Parameters like date_time_input_format were ignored
- Fix secrets masking
- Fix precision loss
- Fix LOGICAL_ERROR
- Fix reading count from cache
- Fix coalescing merge tree segfault
- Update metadata timestamp in iceberg writes
- Using distributed_depth as an indicator of *Cluster
- Spark can't read position delete files
- Fix send_logs_source_regexp
- Fix possible
- Support global constants from WITH statement
- Mask credentials for deltaLakeAzure, deltaLakeCluster, icebergS3Cluster and
- Fix logical error on attempt to CREATE
- Fix HTTP requests made by the url
- Now unity catalog will ignore schemas
- Fix nullability of fields
- Fix a bug
- Fix backup restores failing due to BACKUP_ENTRY_NOT_FOUND
- Add checks for sharding_key during ALTER
- Don't create empty iceberg delete file
- Fix large setting values breaking S3Queue tables
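
One of the fixes above concerns global constants defined in a WITH clause; the entry is truncated, so only the basic syntax involved is sketched here, not the specific scenario that was fixed:

```sql
-- A scalar defined in WITH acts as a named constant for the whole query.
WITH 100 AS threshold
SELECT number
FROM system.numbers
WHERE number >= threshold
LIMIT 3;
```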
Build/Testing/Packaging Improvement
Experimental Feature
New Feature
- Support for the PromQL dialect is added
- AI Powered SQL generation can now infer
- Support ArrowFlight RPC protocol by adding:
- Support the _table virtual column
- Allow to use any storage policy
- Implement AWS S3 authentication with an explicitly
- Support position deletes
- Support Iceberg Equality Deletes
- Iceberg writes for create
- Glue catalogs for writes
- Iceberg Rest catalogs for writes
- Merge all iceberg position delete files into
- Support drop table
- Support alter delete mutations
- Support writes
- Allow reading specific snapshot version in table
- Write more iceberg statistics
- Support add/drop/modify columns
- Support writing version-hint file
- Views created by ephemeral users will now
- Vector similarity index now supports binary quantization
- Allow key value arguments in s3 or
- New system table to keep erroneous incoming
- New SYSTEM RESTORE DATABASE REPLICA
- PostgreSQL protocol now supports the COPY command
- Support C# client
- Add support for hive partition style reads
- Add zookeeper_connection_log system table to store historical
- Enable preemptive CPU scheduling
- Drop TCP connection after a configured number
- Support using projections
- Support DESCRIBE SELECT
- Force secure connection for mysql_port and postgresql_port
- Users can now do case-insensitive JSON key
- Introduction of system.completions table
- Add a new
- Add extra_credentials to AzureBlobStorage to authenticate
- Add function dateTimeToUUIDv7 to convert a DateTime
- timeSeriesDerivToGrid and timeSeriesPredictLinearToGrid aggregate functions
- Add two new TimeSeries
- Add GRANT READ ON S3
- Add Hash as a new output format
- Add ability to set up arbitrary watches
- Enable a mode with a gradual
- Support partially aggregated metrics
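
Two of the features above are simple enough to sketch. DESCRIBE of a SELECT reports the result's column names and types, and dateTimeToUUIDv7 is named in the notes as converting a DateTime into a UUIDv7 value (its exact signature beyond a DateTime argument is assumed):

```sql
-- Inspect the output schema of an arbitrary query:
DESCRIBE SELECT 1 AS x, 'a' AS s;

-- New in this release per the notes above; argument shown is an assumption:
SELECT dateTimeToUUIDv7(now());
```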
Improvement
- Add database_replicated
- Made the table columns
- Support compressed
- Show the number of ranges
- Introduce settings
- Add columns_substreams
- Add a CLI flag --show_secrets to clickhouse
- S3 read and write requests are throttled
- Allow to mix different collations
- Add a tool to simulate, visualize and
- Add support of remote* table
- Set all log messages
- User-defined functions with unusual names and codecs
- Users can now use Time and Time64
- Joins with parallel replicas now use
- Fix compatibility
- Support changing mv
- Add profile event MutationAffectedRowsUpperBound that shows
- Use information from cgroup
- MongoDB: Implicit parsing of strings
- Highlight digit groups in Pretty formats
- Dashboard: the tooltip will not overflow
- Slightly better-looking dots on the dashboard
- Dashboard now has a slightly better favicon
- Web UI: Give browsers a chance
- Add support for applying extra ACL
- Fix usage of "compact" Variant discriminators serialization
- Add a server
- Add a setting json_type_escape_dots_in_keys to escape dots
- Check if connection is cancelled before checking
- Slightly better colors of text selection
- Improved server shutdown handling
- Added a setting delta_lake_enable_expression_visitor_logging
- Cgroup-level and system-wide metrics are reported now
- Slightly better charts in Web UI
- Change the default of the Replicated database
- Fix formatting of CREATE USER with query
- Introduce backup_restore_s3_retry_initial_backoff_ms, backup_restore_s3_retry_max_backoff_ms, backup_restore_s3_retry_jitter_factor
- S3Queue ordered mode fix: quit earlier if
- Support iceberg writes to read from pyiceberg
- Allow set values type casting
- Bump chdig to 25.7.1.
- Low-level errors during UDF execution now fail
- Add get_acl command to KeeperClient
- Adds snapshot version to data lake table
- Add a dimensional metric for the size
- The system.columns table now provides column as
- New MergeTree setting search_orphaned_parts_drives
- Add 4LW in Keeper, lgrq, for toggling
- Match external auth forward_headers in a case-insensitive way
- Encrypt_decrypt tool now supports encrypted ZooKeeper connections
- Add format string column to system
- Update clickhouse-format to accept --highlight as
- Fix iceberg reading by field ids
- Introduce a new backup_slow_all_threads_after_retryable_s3_error setting
- Skip creating and renaming the old temp
- Limit Keeper log entry cache size
- Allow using simdjson on unsupported architectures
- Add introspection
- Remove objects to execute single object storage
- Iceberg's current implementation of positional delete files
- Fix leftovers on the screen, fix crash
- Add missing partition_columns_in_data_file to azure configuration
- Allow zero step in functions timeSeries*ToGrid This
- Add show_data_lake_catalogs_in_system_tables flag to manage adding data
- Add support for macro expansion in remote_fs_zero_copy_zookeeper_path
- AI in clickhouse-client will look slightly better
- Enable trace_log
- Support resolution of more cases
- Ignore UNKNOWN\_DATABASE while obtaining table columns sizes
- Add a limit
- Add a parameter column to system
- Fix parsing of a trailing comma
- Support inner arrays
- All the allocations done
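
The json_type_escape_dots_in_keys setting mentioned above presumably controls whether a dot inside a JSON key is kept as a literal part of the key rather than interpreted as a nested-path separator; the behavior below is inferred from the setting's name, not confirmed by the truncated entry:

```sql
SET json_type_escape_dots_in_keys = 1;

-- With escaping enabled, the key 'a.b' would be stored as one escaped key
-- instead of the nested path a -> b (assumed behavior):
SELECT '{"a.b": 1}'::JSON AS j;
```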
Performance Improvement
- New parquet reader implementation
- Replaced the official HTTP transport in Azure Blob Storage
- Processes indexes in increasing order of file size
- Enable MergeTree setting write_marks_for_substreams_in_compact_parts by default
- Avoid throttling
- ALL LEFT/INNER JOINs will be automatically converted
- Add max_joined_block_size_bytes in addition to max_joined_block_size_rows
- Add new logic
- Allow the optimizer
- Process max_joined_block_rows outside of hash JOIN main
- Process higher granularity min-max indexes first
- Fix a bug
- Vector search queries using a vector similarity
- Improve cache locality of workload distribution among
- Implement addManyDefaults
- Calculate serialized key columnarly when group
- Eliminated full scans for the cases
- Try -falign-functions=64 in an attempt
- Bloom filter index is now used
- Reduce unnecessary memcpy calls
- Optimize largestTriangleThreeBuckets by removing temporary data
- Optimize string deserialization by simplifying the code
- Fix the calculation of the minimal task
- Improved performance of applying patch parts
- Remove zero byte
- Optimize the materialization of constants
- Improve parallel files processing with delta-kernel-rs backend
- New setting, enable_add_distinct_to_in_subqueries, has been introduced
- Reduce query memory tracking overhead
- Implement internal delta-kernel-rs filtering
- Disable skipping
- Allocate the minimum amount of memory needed
- Support bloom filter
- Reduce contention on storage lock
- Add missing
- Allow asynchronously iterating objects from Iceberg table
- Execute non-correlated EXISTS as a scalar subquery
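
The last entry says a non-correlated EXISTS is now executed as a scalar subquery, i.e. evaluated once for the whole query instead of per row, since it references no columns of the outer table. An illustrative query:

```sql
-- The EXISTS predicate does not depend on the outer row, so it is a
-- constant condition and can be computed once up front.
SELECT number
FROM numbers(5)
WHERE EXISTS (SELECT 1 FROM system.one);
```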