v25.5
Backward Incompatible Change (6 entries)

Bug Fix (user-visible misbehavior in an official stable release) (64 entries)

- Fix renames of columns missing
- Materialized view can start too late
- Fix SELECT query rewriting during VIEW creation
- Fix applying async_insert from server
- Add replicas
- Fix refreshable materialized views breaking backups
- Fix old firing logical error
- Fix some cases where secondary
- Fix dumping profile events
- Fix analyzer producing LOGICAL_ERROR
- Fix analyzer: CREATE VIEW
- Fix Block structure mismatch error
- Fix analyzer with prefer_global_in_and_join=1
- Fixed several types of SELECT queries
- Fix conversion between different JSON types
- Fix logical error during conversion of Dynamic
- Fix column rollback on JSON parsing error
- Fix 'bad cast' error
- Allow prewhere in materialized view on columns
- Fix logical error during parsing of bad
- Throw an exception when the parquet batch
- Fix deserialization of variant discriminators with basic
- Dictionaries of type complex_key_ssd_cache now reject zero
- Avoid using Field
- Fix read from Materialized View with Distributed
- Fix a bug where arrayUnion() returned extra
- Fix segfault
- Fix for S3 ListObject
- Fix a bug where arrayUnion() returned extra
- Fix logical error after filter pushdown
- Fix NOSIGN
- Avoid triggering watches on failed multi requests
- Forbid Dynamic and JSON types in
- Fix check
- Fix SecureStreamSocket connection issues
- Fix loading of plain_rewritable disks containing data
- Fix crash
- Verify the table name's length only
- Fix error Block structure mismatch
- Fix two cases of "Logical Error: Can't
- Fix using
- Fix order by JSON column with other
- Fix result duplication
- Fix crash
- Resolve macros for autodiscovery clusters
- Handle incorrectly configured page_cache_limits suitably
- Fix the result of SQL function
- IcebergS3 supports count optimization, but IcebergS3Cluster does not
- Fix AMBIGUOUS\_COLUMN\_NAME error with lazy materialization
- Hide password for query CREATE DATABASE datalake
- Allow to specify an alias in JOIN
- Allow materialized views with UNIONs
- Format specifier %e in SQL function parseDateTime
- Fix warnings Cannot find 'kernel'
- Avoid stack overflow crash
- Fix race during SELECT from system
- Fix: lazy materialization in distributed queries
- Fix Array(Bool) to Array(FixedString) conversion
- Make parquet version selection less confusing
- Fix ReservoirSampler self-merging
- Fix storage
- Fix the destruction order of data members
- enable_user_name_access_type must not affect DEFINER access type
- Query to system database can hang if
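Two of the entries above concern arrayUnion() returning extra elements. As context, arrayUnion merges its array arguments and removes duplicates; a minimal sketch of the intended behavior (hedged, since the full entries are truncated here):

```sql
-- arrayUnion should return each distinct element exactly once;
-- arraySort is applied only to make the output order deterministic.
SELECT arraySort(arrayUnion([1, 2, 2], [2, 3])) AS u;
-- expected: [1, 2, 3], with no extra (duplicated) elements
```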
Build/Testing/Packaging Improvement (7 entries)

Experimental Feature (3 entries)

New Feature (14 entries)

- Support scalar correlated subqueries
- Vector search using the vector similarity index
- Support geo types
- New functions sparseGrams, sparseGramsHashes, sparseGramsHashesUTF8, sparseGramsUTF8
- clickhouse-local (and its shorthand alias, ch) now uses an implicit FROM table when there is input data for processing
- Add stringBytesUniq and stringBytesEntropy
- Add functions for encoding and decoding base32
- Add getServerSetting and getMergeTreeSetting
- Add new iceberg_enable_version_hint
- Allow truncating specific tables
- Support _part_starting_offset virtual column
- Add functions divideOrNull, moduloOrNull, intDivOrNull, positiveModuloOrNull
- ClickHouse vector search now supports both pre-filtering
- Add icebergHash and icebergBucket
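Several of the new functions above can be combined in one query. A hedged sketch follows; the function names come from the entries above, but exact signatures and return types should be checked against the documentation:

```sql
SELECT
    base32Encode('ClickHouse')          AS b32,       -- new base32 codec functions
    divideOrNull(10, 0)                 AS safe_div,  -- presumably NULL instead of an error on a zero divisor
    getServerSetting('max_connections') AS max_conn;  -- read a server-level setting at query time
```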
Improvement (52 entries)

- Add the ability to apply lightweight deletes
- If data in the pretty format is
- Extend the isIPAddressInRange function
- Allow changing PostgreSQL engine connection pooler settings
- Allow to specify _part_offset in normal projection
- Add new columns (create_query and source)
- Add a new field condition to system
- Vector similarity indexes can now be created
- Support Unix timestamps with fractional part
- Add tests for schema evolution
- Improve insert
- The tokens function was extended
- SHOW CLUSTER statement now expands macros
- Support NULLs
- Update cctz to 2025a
- Change the default stderr processing
- Make tabs undo-able in the Web UI
- Remove settings during recoverLostReplica same as it
- Add profile events: ParquetReadRowGroups and ParquetPrunedRowGroups
- Support ALTERing database on cluster
- Skip missed runs of statistics collection
- Some small optimizations for reading Arrow-based formats
- Setting allow_archive_path_syntax was marked as experimental
- Made page cache settings adjustable
- Do not print number tips
- Colors of graphs on the advanced dashboards
- Add asynchronous metric, FilesystemCacheCapacity - total capacity
- Optimize access to system
- Calculate the relevant fields
- Allow to specify storage settings
- Support local storage
- Add a query level
- Fix possible endless loop
- Add filesystem cache
- For clickhouse-benchmark reconfigure reconnect option
- Allow ALTER TABLE
- Vector similarity index is now also used
- Add last_error_message, last_error_trace, and query_id to
- Enable sending crash reports by default
- System table system.functions now shows
- Add access_control_improvements
- Proper implementation of ASTSelectWithUnionQuery::clone() method now takes
- Fix the inconsistent
- Improve JSON type parsing
- Add setting s3_slow_all_threads_after_network_error
- Logging level about the selected parts
- Add runtime/share in tooltips and status messages
- Trace-visualizer: load data from clickhouse server
- Add metrics on failing merges
- clickhouse-benchmark will display percentage based
- Add system
- Add a tool for query latency analysis
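The isIPAddressInRange entry above is truncated, so the extension itself is not described in this excerpt. For context, the base behavior of the function:

```sql
-- Checks whether an address falls inside a CIDR prefix.
SELECT isIPAddressInRange('192.168.1.10', '192.168.1.0/24') AS in_range;  -- 1
```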
Performance Improvement (22 entries)

- Change the Compact part format
- Allow moving conditions with subcolumns
- Speed up secondary indices
- Enable compile_expressions
- New setting introduced: use_skip_indexes_in_final_exact_mode
- Improve cache locality
- Improve performance of S3Queue/AzureQueue
- Introduced threshold
- Now we use the number of replicas
- Allow parallel merging of uniqExact states during
- Fix possible performance degradation of the parallel
- Reduce the number of List Blobs API
- Fix performance of the distributed
- Prevent LogSeriesLimiter from doing cleanup
- Speedup queries with trivial count optimization
- Better inlining for some operations with Decimal
- Set input_format_parquet_bloom_filter_push_down to true by default
- Optimized ALTER
- Avoid extra copying of the block during
- Add setting input_format_max_block_size_bytes to limit blocks created
- Remove guard pages for threads and async_socket_for_remote/use_hedge_requests
- Lazy Materialization with parallel replicas
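Settings named in this section, such as use_skip_indexes_in_final_exact_mode, can be enabled per query with a SETTINGS clause. A hedged sketch against a hypothetical events table:

```sql
-- Hypothetical table `events`; the setting name is taken from the
-- entry above, and its exact semantics are described in the server docs.
SELECT count()
FROM events FINAL
SETTINGS use_skip_indexes_in_final_exact_mode = 1;
```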