v25.11
Backward Incompatible Change
- Remove deprecated Object type
- Remove the obsolete LIVE VIEW feature
- In previous versions, the Geometry type was aliased
- Escape filenames created for Variant type subcolumns
- Enable with_size_stream serialization
- Support exception tagging
- Prohibit the creation of multiple plain-rewritable disks
- Fix Kafka storage SASL settings precedence
- Parquet no-timezone timestamps (isAdjustedToUTC=false) are now read
- Small improvement for T64 codec: it no
Bug Fix (user-visible misbehavior in an official stable release)
- Fix multiIf with constant arguments and short-circuit
- Fix logical error
- Fix a bug
- Fix sometimes missing columns
- Add subcolumn with parallel replicas
- Writer: emit created_by string
- Fix phi-squared computation causing
- Fix reading mixed array of Floats and
- Use shared_ptr for QueryState in TCPHandler
- Fix logical error
- Fix 1
- After recovering, a Replicated database replica might
- Fix possible "Context has expired" with new
- Fix a segmentation fault
- Fix incorrect min(PK)/max(PK) result
- Fix propagation of size restrictions
- Fix top_k to respect the threshold parameter
- ArrowFlight endpoint sources that required an SSL
- Add via ALTER
- Fix bug in the function reverseUTF8
- Fix icebergS3Cluster protocol
- Disable parallel_replicas_support_projection
- Propagate context on internal casts
- Fix getting file
- Do not check access for SET DEFINER :definer
- Fix LOGICAL_ERROR
- Fix crash
- This closes
- Fix performance degradation
- Fix ACCESS_ENTITY_NOT_FOUND error
- Fix sparse columns processing by CHECK constraint
- Fix incorrect row count
- Prevent TTL merge counter leaks
- Fix calculation of buffer size needed
- Fix use-after-free
- Avoid possible data-races due to mutable exceptions
- Fix rare server crash if source table
- Flush buffers when sending an error
- Prevent query masking rules
- Fix incorrect row count
- Support estimating the data type of LowCardinality(Nullable(String)) to avoid a LOGICAL_ERROR
- Possible crash/undefined behavior in IN function
- Fix truncating arguments of countIf
- Avoid losing uncompressed checksums
- Fix LOGICAL\_ERROR
- Fix loading tables
- Fix incorrect merge handling of TTL-emptied parts
- Fix logical error
- Fix reading of changelogs during Keeper startup
- Fix incorrect JOIN results
- Fix possible "Context has expired" with analyzer
- Fix MaterializedPostgreSQL replication
- Fix a crash
- Fix crash
- Fix logical error with query_plan_convert_join_to_in
- Fix exception
- Add runtime filters only
- Fix hasAnyTokens, hasAllTokens and tokens functions concurrent
- Fix logical error/crash with join runtime filter
- Fix possible logical error during ARRAY JOIN
- Avoid crash due to reading from remote
- Fix race condition
- Fix bug in projection
- Fix Paimon table function handling
- Fix possible logical error during reading
- Fix possible stack overflow
- Fix logical error with empty tuple
- Remove injective functions
- If the merge was interrupted by, for
- Fix logical error with empty tuple
- Now ClickHouse will show data lake catalog
- Fix using native copy on GCS
- Fix buffer size calculation
- Fix wrong escaping
- Fix URL validation
- Fix possible crash during remote query
- Fix inference of bad DateTime64 values
- Fix logical error caused by empty tuple
- Backported in #90457: Do size checks
- Fix possible 'Invalid number of rows
- Fix possible error Column with Array type
- Allow files starting with dots in user_files
- Fix logical error and modulo bug
- Fix integer overflow
- Fix hive partitioning
- Fix possible
- Fix crash
- Handle implicit conversion from a string
- Fix incorrect
- Fix a row-count mismatch
- Fix bug in reading subcolumns from JSON
- Fix trim, ltrim, rtrim functions not working
- Fix possible logical error
- Fix a bug
- Fix incorrect distance calculations
- Fix logical error caused by a rare
- Fix CoalescingMergeTree
Build/Testing/Packaging Improvement
Experimental Feature
New Feature
- Geometry Data Type
- Add new SQL statement EXECUTE AS
- naiveBayesClassifier Function
- Fractional LIMIT and OFFSET
- ClickHouse Subsystem for Microsoft OneLake catalog
- flipCoordinates Function
- Add system.unicode table
- Add a new MergeTree
- Add support for the cume_dist window
- Add a new argument preprocessor in text
- Adds a memory_usage field
- Add setting into_outfile_create_parent_directories to automatically create parent
- Support CREATE OR REPLACE syntax
- Support arrayRemove to remove all elements equal
- Introduce midpoint scalar function that calculates average
- Web UI now provides a download button
- Add arrow_flight_request_descriptor_type
- New aggregate functions argAndMin and argAndMax
- Settings to write and verify parquet checksums
- Add kafka_schema_registry_skip_bytes
- Add h3PolygonToCells
- Add new virtual column _tags
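A few of the new SQL features above can be sketched as a session like the following. This is a hypothetical illustration: the function signatures and the fractional-LIMIT semantics are assumptions inferred from the entry names, and should be checked against the 25.11 documentation.

```sql
-- Fractional LIMIT/OFFSET (assumption: a fraction below 1 selects that share of rows)
SELECT number FROM numbers(100) LIMIT 0.1;

-- arrayRemove: remove all elements equal to the given value (assumed signature)
SELECT arrayRemove([1, 2, 2, 3], 2);  -- expected [1, 3] under the assumed semantics

-- midpoint: average of two scalar values (assumed two-argument form)
SELECT midpoint(2, 4);

-- cume_dist window function (standard SQL semantics)
SELECT number, cume_dist() OVER (ORDER BY number) FROM numbers(4);
```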
Improvement
- UNION should unify types
- Roles defined in SQL can now be
- Add new is_internal column to system
- Add support for inverse IS DISTINCT
- clickhouse-client and clickhouse-local in the interactive mode
- Output format-related settings now don't affect query
- HTTP interface will provide Age and Expires
- Allow inserting into remote and data lake
- Add query SYSTEM DROP TEXT INDEX CACHES
- Enable enable_shared_storage_snapshot_in_query by default
- Add send_profile_events
- Allow disabling background download of nearby segments
- Allow FETCH PARTITION when there are broken
- Fix uncaught exception while getting MySQL table
- All DDL ON CLUSTER queries now execute
- Add support for UUID in Parquet when
- Disable ThreadFuzzer
- Make query plan optimizations visible
- Replace TABLE queries
- Support JSON and Dynamic types
- Implement missing parts of the ArrowFlight server
- Add multiple histogram metrics
- Add input_headers option to EXPLAIN query
- Adds profile events to count the number
- Add two settings: merge tree
- Fix binary deserialization of Array and Map
- Introduced a LockGuardWithStopWatch class and used it
- Allow using opt-in AWS regions
- User can now cancel the query
- Web UI will display bars
- Reduce the amount of metadata SharedMergeTree stores
- Make S3Queue respect disable_insertion_and_mutation server setting
- Set default s3_retry_attempts to 500
- kafka_compression_codec and kafka_compression_level settings can now be
- Add a new column statistics in system
- Improve error message when generic expansion is
- Allow using a replicated_table as a data
- Queries starting with whitespace are no longer
- Support Array of String as
- Modify how plain-rewritable disks store their metadata
- Subqueries which take part inside
- Enable create_table_empty_primary_key_by_default by default
- Fix incorrect code
- In previous versions, the setting create_table_empty_primary_key_by_default was ineffective
- Update chdig to v25
- Make the resizer of the query textarea
- Improved memory tracking in hash joins result
- Async server log: Flush earlier and increase
- Fix wrong FilesystemCacheBytes
- Clarified description of some columns in system.view_refreshes
- Cache S3 credentials interacting
- Fix runtime filter pushdown
- If the system memory is lower than
- Type hints in the Web UI no
- Show table properties in Web UI
- Support non_replicated_deduplication_window
- Add a possibility to set a list
- Store deduplication blocks ids in the system.part_log
- Changed default of filesystem cache setting keep_free_space_remove_batch
- Introduce TTL DROP merge type, and do
- Use lower node limit
- Make SYSTEM FLUSH LOGS query wait
- Fix incorrect rows_before_limit_at_least
- Fix 0 rows
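Two of the operational improvements above translate into sessions like the following. The statement names are taken verbatim from the entries; the waiting behavior of SYSTEM FLUSH LOGS is as described above, and the follow-up query is a hypothetical illustration.

```sql
-- New maintenance statement for dropping text index caches
SYSTEM DROP TEXT INDEX CACHES;

-- SYSTEM FLUSH LOGS now waits for the flush to complete before returning,
-- so a follow-up query sees up-to-date log tables:
SYSTEM FLUSH LOGS;
SELECT count() FROM system.query_log WHERE event_date = today();
```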
Performance Improvement
- Parquet reader v3 is enabled by default
- Distributed execution: better split tasks
- RIGHT and FULL JOINs now use ConcurrentHashJoin
- Optimization for large values of constant expressions
- Up to 8x faster SELECT queries
- Parallel merge for small GROUP BY
- Allow using projections as secondary index
- Fix VDSO for rare AArch64 systems and
- Improve LZ4 decompression speed
- Fix and automatically scales to high request
- Improved text index performance
- Queries can now benefit
- Use aggregate projection for queries with DISTINCT
- Improve the performance
- Run streaming LIMIT BY transform
- Allow rewriting ANY LEFT JOIN or ANY
- Reduce the overhead of logging: use less
- Add filter steps over other
- Slightly speed up some uniqExact operations
- Increase the limit for lazy materialization rows
- Enable setting allow_special_serialization_kinds_in_output_formats by default
- Add parallelism for ALTER TABLE
- Add cache for bcrypt authentication
- Skip index used
- Optimization enable_lazy_columns_replication is now the default
- Introduce a per-table cache of ColumnsDescription
- Improve query performance
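The projection-related entries above (projections as a secondary index, aggregate projections for DISTINCT) might look like this in practice. This is a minimal sketch with hypothetical table and projection names; whether a given query actually uses the projection depends on the optimizer.

```sql
-- Hypothetical table with a projection sorted by a different key
CREATE TABLE hits
(
    user_id UInt64,
    url String,
    ts DateTime
)
ENGINE = MergeTree
ORDER BY ts;

ALTER TABLE hits ADD PROJECTION by_user
(
    SELECT user_id, url ORDER BY user_id
);

-- Per the entries above, such queries may now be answered via the
-- projection instead of a full scan (assumption):
SELECT DISTINCT user_id FROM hits WHERE user_id = 42;
```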