v25.9
Backward Incompatible Change (4 entries)

Bug Fix (user-visible misbehavior in an official stable release) (83 entries)
- Results of alter queries are only validated
- Limit the number of tasks of each
- Shutdown tables properly when recovering database replica
- Check access rights during typo correction hints
- 1
- Prevent unnecessary optimization of the first argument
- Mapping between iceberg source ids and parquet
- Fix reading file size separately from opening
- ClickHouse Keeper no longer fails
- PR closes
- Fix null pointer
- Allow correlated subqueries in the FROM clause
- Fix alter update of a column
- Forbid altering columns whose subcolumns are used
- Fix reading subcolumns with non-default column mapping
- Fix using wrong default values
- DataLake hive catalog url parsing
- Fix logical error during filesystem cache dynamic
- Use NonZeroUInt64 for logs_to_keep in DatabaseReplicatedSettings
- Skip index if the table
- Fix problems with parsing of Iceberg partition
- Fix crash
- Process includes from /etc/metrika.xml as a default
- Fix accurateCastOrNull/accurateCastOrDefault from String to JSON
- Support directories without '/'
- Fix crash with replaceRegex, a FixedString haystack
- Fix crash during ALTER UPDATE Nullable
- Fix missing column definer
- Fix cast from LowCardinality(Nullable(T)) to Dynamic
- Fix logical error during writes to DeltaLake
- Fix 416 The range specified
- Fix GROUP BY Nullable
- Fix a bug
- Fail if all replicas are unavailable
- Fix leaking of MergesMutationsMemoryTracking due to Buffer
- Fix show tables after dropping reference table
- Fix missing chunk header
- Fix possible deadlock
- Fix reading subcolumns
- Avoid collision when processing DDL tasks
- Fix detach/attach
- Fix use of uninitialized memory
- Functions searchAny and searchAll when called
- Fix function timeSeriesResampleToGridWithStaleness
- Fix crash caused by merge_tree_min_read_task_size being set
- While reading takes format
- Avoid SIGSEGV
- Fix Backup db engine raising exception
- Fix missing chunk header
- Fix S3Queue logical error "Expected current processor
- Nullability bugs in insert and pruning
- Disable file system cache if Iceberg metadata
- Fix 'Deadlock
- Support IPv6 in listen_host
- Fix shutdown
- Fix distributed queries with describe_compact_output=1
- Fix window definition parsing and applying query
- Fix exception Partition strategy wildcard can not
- Fix LogicalError if parallel queries are trying
- Add some additional validations in ColumnObject
- Fix empty Tuple permutation with limit
- Do not use separate keeper node
- Fix TimeSeries engine table breaking creation
- Fix querying system
- Fix seeking at the end of
- Process exception which is thrown during asynchronous
- Fix saving of big preprocessed XML configs
- Fix date field populating
- Fix infinite recalculation of TTL with WHERE
- Fix possible
- Fix resolving table schema with url() table
- Correctly cast output of PREWHERE after splitting
- Fix lightweight updates with ON CLUSTER clause
- Fix compatibility of some aggregate function states
- Fix an issue where model name
- EmbeddedRocksDB: Path must be inside user_files
- Fix KeeperMap tables created before 25
- Fix maps and arrays field ids reading
- Fix reading array with array sizes subcolumn
- Fix CASE function with Dynamic arguments
- Fix reading empty array from empty string
- Fix possible wrong result of non-correlated EXISTS
- Throws an error if iceberg_metadata_log is not
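Among the fixes above, the accurateCastOrNull/accurateCastOrDefault entry concerns converting String values to the JSON type. The intended shape of that behavior (return NULL, or a caller-supplied default, rather than throw on malformed input) can be sketched in plain Python; this is an illustration of the fallback semantics, not ClickHouse's implementation:

```python
import json

def accurate_cast_or_null(s):
    """Sketch of accurateCastOrNull(s, 'JSON'): parse the string,
    or return None (i.e. NULL) when it is not valid JSON."""
    try:
        return json.loads(s)
    except (json.JSONDecodeError, TypeError):
        return None

def accurate_cast_or_default(s, default):
    """Sketch of accurateCastOrDefault(s, 'JSON', default): same,
    but fall back to a caller-supplied default instead of NULL."""
    parsed = accurate_cast_or_null(s)
    return default if parsed is None else parsed
```

Note that this sketch conflates a literal JSON `null` with a failed parse; the real type system distinguishes the two.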
Build/Testing/Packaging Improvement (7 entries)

Experimental Feature (3 entries)

New Feature (21 entries)
- Users can now use NATS JetStream
- Add support for authentication and SSL
- Add new parameter to S3 table engine
- Update for Iceberg table engine
- Add system table iceberg_metadata_log to retrieve Iceberg
- Support custom disk configuration via storage level
- Support Azure
- Support Unity catalog on top of Azure
- Support more formats
- Add a new system table database_replicas
- Add function arrayExcept that subtracts one array
- Adds a new system.aggregated_zookeeper_log table
- New function, isValidASCII
- Boolean settings can be specified without arguments
- Allow for overriding the log level during
- Aggregate functions timeSeriesChangesToGrid and timeSeriesResetsToGrid
- Allow users
- Add warnings for CPU and memory usage
- Support the oneof
- Improve allocation profiling based on jemalloc's
- New setting to delete files
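Two of the new functions listed above have semantics simple enough to sketch in plain Python. Both snippets are illustrative assumptions about the behavior (in particular, ordering and duplicate handling for arrayExcept are not spelled out in the truncated entry), not the server's implementation:

```python
def array_except(a, b):
    # Assumed arrayExcept semantics: keep the elements of `a` that
    # do not occur in `b`, preserving their original order.
    exclude = set(b)
    return [x for x in a if x not in exclude]

def is_valid_ascii(data: bytes) -> bool:
    # Assumed isValidASCII semantics: true iff every byte is < 0x80.
    return all(byte < 0x80 for byte in data)
```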
Improvement (40 entries)
- Support writing multiple data files
- Add rows/bytes limit for inserted data files
- Support more types
- Make S3 retry strategy configurable and make
- Allow it to survive zookeeper connection loss
- You can use query parameters after
- Give more clear instruction for users
- It's no longer possible
- Simplified (and avoided some bugs) a logic
- Add deltaLakeAzureCluster
- Apply azure_max_single_part_copy_size setting for normal copy operations
- Slow down S3 client threads
- Mark settings allow_experimental_variant/dynamic/json and enable_variant/dynamic/json as obsolete
- Support filtering by complete URL string
- Add a new
- Fix detection of systemd
- Add a new startup_scripts_failure_reason dimensional metric
- Allow to omit identity function
- Add ability to enable JSON logging only
- Allow using native numbers in WHERE
- Fix error
- Add extra retries for disk access check
- Make the staleness window
- Add FailedInternal*Query profile events
- Add via config file
- Add asynchronous metric for memory usage
- You can use clickhouse-benchmark --precise flag
- Make nice values of Linux threads configurable
- Fix misleading “specified upload does not exist”
- Limit query plan description
- Add ability to tune pending signals
- Improve performance of RemoveRecursive request
- Remove extra whitespace
- Remove for plain rewriteable disk
- Support performance tests against remote ClickHouse
- Respect memory limits in some places
- Throw an exception if setting network_compression_method is
- System table system.query_cache now returns *all* query
- Enable short circuit evaluation
- Add a new column statistics in system
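The "Make S3 retry strategy configurable" entry above refers to tuning how the S3 client retries failed requests. The usual shape of such a strategy (a bounded number of attempts, exponential backoff with jitter between them) can be sketched as follows; the parameter names here are hypothetical, not the actual setting names:

```python
import random
import time

def retry_with_backoff(op, max_attempts=5, base_delay=0.1, max_delay=2.0):
    # Generic configurable retry loop: exponential backoff with full
    # jitter between attempts; re-raise once the last attempt fails.
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.random())
```

With full jitter the sleep is uniform in [0, delay], which spreads retries out when many clients hit the same transient failure.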
Performance Improvement (12 entries)
- Support filtering data parts using skip
- Added JOIN order optimization
- Distributed INSERT SELECT for data lakes
- Improve PREWHERE optimization
- Implemented rewriting of JOIN: 1
- Improved performance of vertical merges after executing
- HashJoin performance optimised slightly
- Radix sort: help the compiler use SIMD
- Improve performance of short queries with lots
- Improved performance of applying patch parts
- Add setting query_condition_cache_selectivity_threshold
- Reduce memory usage
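The radix sort entry above is about restructuring the C++ inner loops so the compiler can emit SIMD; the underlying algorithm, a least-significant-digit radix sort over fixed-width integer keys, can be sketched in Python. This version only illustrates the byte-per-pass structure — the Python code itself gains nothing from SIMD:

```python
def radix_sort_u32(keys):
    # LSD radix sort over 32-bit unsigned keys, one byte per pass:
    # four stable bucketing passes, least significant byte first.
    for shift in (0, 8, 16, 24):
        buckets = [[] for _ in range(256)]
        for k in keys:
            buckets[(k >> shift) & 0xFF].append(k)
        keys = [k for bucket in buckets for k in bucket]
    return keys
```

Each pass is stable, so after the fourth (most significant) pass the keys are fully ordered.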