v25.4
Backward Incompatible Change
Bug Fix (user-visible misbehavior in an official stable release)
- Fix incorrect projection analysis
- Fix Part does not contain
- Fix not working skip
- Fix a bug
- Fix receiving messages from NATS server without
- Fix logical error while reading from empty
- Use default format settings
- Fix checking if the table data path
- Fix sending constant values to remote
- Fix a crash because of expired context
- Hide credentials in RabbitMQ, NATS, Redis, AzureQueue
- Fix undefined behaviour on NaN comparison
- Regularly check if merges and mutations were
- Add replicas
- Fix possible crash
- Fix crash that happens
- Disable fuzzy search
- Fix a bug that a vector search
- Fix a minuscule error "The requested output
- Fix of a bug
- Allow specifying an empty session_id query parameter
- Fix metadata override
- Fix crash
- Do not try to create history_file
- Fix system
- Fix for checks
- Fix possible crash due to concurrent S3Queue
- GroupArray* functions now produce BAD_ARGUMENTS error
- Remove before it's detached
- Fix the fact that "alterable" column
- Mask Azure access signature
- Fix prefetching of substreams with prefixes
- Fix crashes /
- Fix delta-kernel-rs auth options
- Do not schedule Refreshable Materialized Views task if
- Validate access to underlying tables
- FINAL modifier can be ignored
- BitmapMin returns the uint32_max
- Disable parallelization of query processing right after
- Set at least one stream
- Fix logical error "Cannot unregister: table uuid
- ClickHouse is now able
- -Cluster table functions were failing
- Better checks when transactions are not supported
- Cleanup query settings during attach
- Fix a crash
- Fix the case
- Fix a problem
- Fixed several types of SELECT queries
- Don't block table shutdown while running CHECK
- Fix ephemeral count
- Fix bad cast
- Fix the consistency
- Dictionaries of type ssd_cache now reject zero
- Fix crash
- Fix parsing of bad DateTime values
- Avoid triggering watches on failed multi requests
- Fix reading Iceberg table failed
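The NaN-comparison entry above touches a classic IEEE 754 pitfall: NaN compares unequal to every value, including itself, so naive comparison-based logic (sorting, deduplication, min/max) can hit undefined or surprising behavior. A minimal Python illustration of the pitfall and one common mitigation (this is conceptual, not ClickHouse code):

```python
import math

nan = float("nan")

# IEEE 754: NaN compares unequal to everything, including itself.
print(nan == nan)            # False
print(nan < 1.0, nan > 1.0)  # False False

# Comparison-based logic therefore needs an explicit, deterministic
# rank for NaN; here we deliberately sort NaNs last.
values = [3.0, nan, 1.0]
ordered = sorted(values, key=lambda x: (math.isnan(x), 0.0 if math.isnan(x) else x))
print(ordered[:2])  # [1.0, 3.0]; the NaN is deterministically last
```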
Build/Testing/Packaging Improvement
Experimental Feature
New Feature
- Add CPU slot scheduling for workloads, see
- Clickhouse-local will retain its databases after restart
- Reject queries when the server is overloaded
- Add setting to query Iceberg tables as of a specific timestamp
- An in-memory cache for Iceberg metadata
- Support DeltaLake table engine
- Add an in-memory cache for deserialized vector
- Support partition pruning for DeltaLake
- Support a background refresh
- Support using custom disks to store databases
- Support ALTER TABLE
- Inline Credentials For Kafka
- Allow setting default_compression_codec
- Bind Host In Clusters Configuration
- Introduce a new column, parametrized_view_parameters in system
- Allow changing a database comment
- Support SCRAM-SHA-256 authentication
- Add functions arrayLevenshteinDistance, arrayLevenshteinDistanceWeighted, and arraySimilarity
- Setting parallel_distributed_insert_select takes effect
- Introduce toInterval function
- Add several convenient ways to resolve root
- Support password based auth
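The arrayLevenshteinDistance family added above generalizes string edit distance to arrays. A minimal Python sketch of the standard (unweighted) dynamic-programming computation these functions are named after; the actual ClickHouse signatures and weighting semantics may differ:

```python
def levenshtein(a, b):
    """Classic edit distance between two sequences (rolling-row DP)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein([1, 2, 3], [1, 4, 3]))  # one substitution -> 1
print(levenshtein("kitten", "sitting"))   # 3
```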
Improvement
- Serialize query plan for Distributed queries
- Support JSON type and subcolumns reading
- Support ALTER DATABASE
- Refreshes of refreshable materialized views now appear
- User-defined functions (UDFs) can now be marked
- Enabled a backoff logic
- Add query_id to system
- Support converting UInt128 to IPv6
- Don't parse special Bool values
- Support configurable per task waiting time
- Implement comparison
- Support by system
- Add validation for
- Add config enable_hdfs_pread to enable or disable
- Add profile events for number of zookeeper
- Allow creating and inserting into temporary tables
- Decrease max_insert_delayed_streams_for_parallel_write
- Fix year parsing
- Attaching parts of MergeTree tables will be
- Query masking rules are now able
- Add column index_length_column to information_schema
- Introduce two new metrics
- Fix incorrect S3 URL parsing
- Fix incorrect values of BlockActiveTime, BlockDiscardTime, BlockWriteTime
- Respect loading_retries limit for errors during push
- Fix performance and progress bar
- Support include, from_env, from_zk
- Add a dynamic warning to the system
- Add field condition to system table system
- Allow an empty value
- Fix IN clause type coercion
- Do not check parts
- Make data types in used_data_type_families
- Cleanup settings during recoverLostReplica same as it
- Use insertion columns for INFILE schema inference
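The UInt128-to-IPv6 conversion mentioned above is a natural mapping, since an IPv6 address is exactly a 128-bit unsigned integer. Python's standard library demonstrates the equivalent transformation (an illustration of the mapping, not ClickHouse's implementation):

```python
import ipaddress

# An IPv6 address is a 128-bit unsigned integer.
n = 0x20010DB8000000000000000000000001
addr = ipaddress.IPv6Address(n)
print(addr)            # 2001:db8::1
print(int(addr) == n)  # True -- the conversion round-trips losslessly
```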
Performance Improvement
- Optimize performance with lazy columns that read
- Enabled the query condition cache by default
- Speed-up building JOIN result
- Merging Filters For Join Optimization
- Use dynamic sharding for JOIN
- Support Iceberg data pruning based on lower
- Implement trivial count optimization
- Reduce memory usage
- Disable filesystem_cache_prefer_bigger_buffer_size when the cache is used
- Now we use number of replicas
- Support asynchronous IO prefetch
- Improve performance
- Decrease the amount of Keeper requests
- Marginal optimization for running functions
- Optimize arraySort
- Reduce the consumption of locks
- Optimize s3Cluster performance
- Optimize order by single Nullable or LowCardinality
- Optimize memory usage of the Native
- Trivial optimization: do not rewrite count
- Skip indices
- Vector similarity index could over-allocate main memory
- Introduce a setting schema_type
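The lazy-column optimization above defers reading column data until a later stage of the query actually consumes it. The underlying idea can be sketched with a generator in Python (a conceptual illustration only, not ClickHouse internals):

```python
from itertools import islice

def lazy_column(source):
    """Yield values on demand instead of materializing the whole column."""
    for row in source:
        yield row

# No data is produced until something consumes the column; only the
# rows needed to answer the query (e.g. after LIMIT) are ever read.
col = lazy_column(range(10**9))
first_three = list(islice(col, 3))
print(first_three)  # [0, 1, 2]
```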