v25.12
Backward Incompatible Change
- ALTER MODIFY COLUMN now requires an explicit DEFAULT (see the sketch after this list)
- Ngram tokenizer will no longer return ngrams
- When altering a column from String
- Remove settings allow_not_comparable_types_in_order_by/allow_not_comparable_types_in_comparison_functions
- Changed the default of setting check_query_single_value_result
- Fix around implicit
- Update clickhouse-client to return a non-zero exit
- It is now forbidden to create special MergeTree
- Fix functions bitShiftLeft and bitShiftRight to return
- Follow-up to
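A minimal sketch of the ALTER MODIFY COLUMN change above, assuming it means the DEFAULT expression has to be restated explicitly when modifying a column that carries one; the table, column, and default value here are made up:

```sql
-- Hypothetical table and column; the DEFAULT expression is now spelled out
-- explicitly in the MODIFY COLUMN statement instead of being left implicit.
ALTER TABLE events MODIFY COLUMN status String DEFAULT 'unknown';
```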
Bug Fix (user-visible misbehavior in an official stable release)
- Fix some bugs with PREWHERE
- Initialize DNSResolver before use
- Fix reading subcolumns from a column
- Fix GenerateRandom engine crash on non-literal parameters
- Fix removing unused projection columns
- Fix incorrect sharding
- Fix issues
- Fix several issues caused by premature column
- Throw exception when temporary_files_buffer_size is set
- Fix Bad get error that happened during
- Add subcolumn X
- Fix bugs in the theilsU and contingency
- Implement additional
- Fix possible crash during remote query
- Fix user-visible misbehavior
- Fix inference of bad DateTime64 values
- Do size checks when deserializing data
- Enable TTL drop merges
- When Kafka table was created
- Fix possible error Column with Array type
- Fix a crash during the clean server
- Fix logical error and modulo bug
- Fix parquet writing not preserving original order
- Do not apply constant node optimization
- Fix hive partitioning
- Fix JSON Exception
- Fix a row-count mismatch
- Fix infinite nan/inf WITH FILL query
- Fix 'column not found' error with query_plan_use_logical_join_step=0
- Fix some queries with aggregated projection optimization
- Fix bug in reading subcolumns from JSON
- Now ClickHouse will not use read-in-order optimization
- Time and Time64 should now respect timezones
- Fix a bug where SELECT CAST(CAST(now(), 'Time'
- Fix crash
- Fix cluster discovery updates
- Fix possible logical error
- Fix a bug
- Fix possible logical error
- Fix possible logical error during renaming and
- Fix parsing JSON/Dynamic/Variant values from HTTP parameters
- Fix a race condition
- Fix incorrect distance calculations
- Fix a bug where toDateTimeOrNull of
- Fix possible logical error during output
- Fix IPv4 parsing functions
- Retry to markReplicasActive when failing
- Fix logical error caused by a rare
- Fix thread sanitizer crashes
- Fix logical error
- Fix formatting
- Fix potential crash
- Fix analyzer validation
- Fix type-casting errors
- Fix issue where queries
- Fix segfault on query with EXISTS function
- Fix Logical error: 'Inconsistent AST
- Fix access validation
- Fix named collections hidden secrets to depend
- Disable enable_shared_storage_snapshot_in_query
- Fix duplicate data issue
- Fix possible inconsistent state of shared data and dynamic paths in JSON
- Fix ALTER MODIFY QUERY with dictGet() and
- Fix compatibility
- Fix background flush of Buffer
- Don't list contrib/ parent folder in system.licenses
- Fix high memory usage during reading JSON/Dynamic/Variant
- Fix buffer allocation
- Fix possible logical error upon receiving another
- Fix wildcard grants check
- Fix SummingMergeTree aggregation
- Fix handling global grants with wildcard revokes
- Fix possible infinite loop in azure list blobs
- Fix excessive Buffer flushes
- Fix bug in JSON
- Fix std::out_of_range
- Fix reading dynamic subcolumns from materialized columns
- Fix arrayFilter function not working
- Fix logical error
- Add via alter add column in old
- Fix merging JSON columns
- Fix possible inconsistent dynamic structure during writing in compact parts
- Fix parsing of subnormal float values
- Fix wrong schema
- Fix https://github
- Fix inserting
- Fix SYSTEM DROP FILESYSTEM CACHE ON CLUSTER (see the sketch after this list)
- Fix possible logical error "Bad cast
- Fix a crash
- Fix Alias table with empty args
- Currently, the setting is set
- Remove Sparse columns
- Fix hive partitioning bug
- Support but no column is HIERARCHICAL
- Fix crash
- Fix parallel writes triggered by MaterializedView
- Handle null values for YTsaurus XML dictionaries
- Fix QBit type failing with query parameters
- Fix LOGICAL_ERROR
- Fix possible datarace
- Fix logical error caused by asterisks argument
- Fix an overflow while reading from ORC
- Allow ALTERs
- Fix L2DistanceTransposed returning
- Fix a bug
- Fix increased memory usage
- Fix JOIN queries with view and enabled
- Fix delta lake setting delta_lake_snapshot_version which could
- Fix LOGICAL_ERROR
- Fix block structure mismatch with queries using
- Fix logical error with join_use_nulls and multiple
- Fix in https://github
- Fix the ORC reader bug
- Fix a serialization
- Fix Directory '{}' does not exist
- Prevent crash when connecting to mongodb
- Fix "TOO\_MANY\_MARKS" error which could have happened
- Close https://github.com/clickhouse/clickhouse/issues/87417: the writing schema of v1
- Fix the wrong names of readWKT, readWKB
- Fix numerous logical errors, overflow and functional
- Fix incorrect results that could appear
- Fix system
- Fix UDF replace
- If there is no active host found
- Surround operators IN, NOT IN with parentheses
- Fix backup of KeeperMap and Memory tables
- Fix crash
- Fix logical error caused by using Nothing
- Fix possible crash
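A short sketch of the statement involved in the SYSTEM DROP FILESYSTEM CACHE ON CLUSTER fix above; the cluster name is a placeholder:

```sql
-- Drop the filesystem cache on every node of a (placeholder) cluster.
SYSTEM DROP FILESYSTEM CACHE ON CLUSTER my_cluster;
```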
Build/Testing/Packaging Improvement
Experimental Feature
New Feature
- Remove files
- S3/Azure Queue: add setting commit_on_select
- Add instrumentation at runtime using XRay
- Allow the use of non-constant second arguments for `IN` operator
- Functions to calculate area and perimeter
- Implement dictGetKeys function that returns the dictionary
- Disable exceptions
- Support direct (nested loop) join
- Support ORDER in Iceberg tables
- Projection level settings
- Add HMAC(algorithm, message, key) SQL function (see the sketch after this list)
- Add support for has
- New input/output format Buffers
- Add a setting max_streams_for_files_processing_in_cluster_functions
- Data masking for row-level security
- Add allow_reentry option to windowFunnel aggregate
- Keeper compatibility with ZooKeeper: create with statistics
- Support ZooKeeper persistent watches
- Index behavior on ALTER
- As Time and Time64 data types are
- Support reading DeltaLake CDF via deltaLake table
- Support negative
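A hedged usage sketch for the new HMAC(algorithm, message, key) function listed above; the algorithm name 'SHA256', the hex() wrapping of the result, and the literal arguments are assumptions rather than confirmed details:

```sql
-- Compute a keyed MAC over a message (illustrative algorithm and values).
SELECT hex(HMAC('SHA256', 'payload to authenticate', 'my-secret-key')) AS mac;
```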
Improvement
- Add a new
- Format, named tuples are now displayed as
- Add fields last_error_time, last_error_message, last_error_query_id and last_error_trace
- CLI client can now suppress the 'ClickHouse
- Add error message that the part was
- Add dependencies and missing_dependencies columns to system
- Now, a table's default expressions work correctly
- Allow disabling of PSI_*_* async metrics collection
- Add support of sparse serialization for columns
- Plain-rewritable disk has its own implementation and
- Any exception in HTTP should never contain
- Add a keeper-server-side check during handshake
- Add kafka_consumer_reschedule_ms as a tunable Kafka table setting (see the sketch after this list)
- Add a new column parts_in_progress_names to system
- Retry network errors when S3 library parses
- Avoid overwhelming Prometheus
- Add support for loading ClickHouse Client configuration
- Add byte size limit for append request
- Add a setting for Iceberg to prevent
- Update warning messages when approaching guardrails limits
- Stream chunks in system.filesystem_cache table instead
- Fix bad exception message
- Remove when table parts are dropped or
- Bump chdig
- Now pre-signed URLs work with S3
- Text index now works with ReplacingMergeTree tables
- Avoid exposing the ClickHouse server version
- Now HTTP_CONNECTION_LIMIT_REACHED exception would be thrown
- Introduce system
- Disable parts of your query while testing
- Add system
- Add profile events FailedInitialQuery and FailedInitialSelectQuery
- Fix potential thread pool starvation
- Support JSON type
- Fix spurious memory limit errors
- Ngrams tokenizer can now be built
- Support storage settings
- Throw "not implemented" for truncate query
- Avoid getting DB::Exception: apache::thrift::transport::TTransportException: MaxMessageSize reached
- Add a setting insert_select_deduplicate
- Allow implicit type conversion when casting Array
- Add CapnProto message size limit
- Update for merges as well, this PR
- Fix client_info
- The refresh_certificates_task_interval parameter in the ACME client configuration now
- Log parts events in system.part_log for system.*_log
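A sketch of how the new kafka_consumer_reschedule_ms setting listed above could be applied to a Kafka table; the broker, topic, group, format, and the chosen interval are placeholders:

```sql
-- Illustrative Kafka table; only kafka_consumer_reschedule_ms is the new knob,
-- the other settings and all values are placeholder examples.
CREATE TABLE kafka_queue (message String)
ENGINE = Kafka
SETTINGS
    kafka_broker_list = 'localhost:9092',
    kafka_topic_list = 'events',
    kafka_group_name = 'clickhouse-consumers',
    kafka_format = 'JSONEachRow',
    kafka_consumer_reschedule_ms = 500;
```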
Performance Improvement
- Optimize ORDER BY ... LIMIT N (see the sketch after this list)
- Better application of skip indexes
- Support keeping reading
- Improve the performance of lazily materialized columns
- Users should see lower latency
- Implement simple DPsize join reordering algorithm
- Fail fast when queries reach row limits
- Add constraints regarding SELECT queries which can
- Prefetch keys during hash table iteration
- Optimize the histogram aggregate function by sorting
- Improved filtering performance for predicates
- Optimize repeated
- Improve topK aggregate function performance and behaviour
- Improved performance of Decimal comparison operations
- Support partition pruning
- Use advanced SIMD operations
- Improve JIT function performance
- Speed up T64 decompression via dynamic dispatch
- Optimize MergeTree reader
- Introduce an additional heuristic
- Improve query performance
- Speed up converting columns
- Speed up sorting of single numeric block
- Add optimization to remove unused columns
- Default value of query_plan_optimize_join_order_limit is changed
- Enable the setting allow_statistics_optimize by default, so
- Support JOIN runtime filters
- Reduce memory usage during merges
- Enable using
- Add S3 providers if GCP OAuth is
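A sketch of the query shape targeted by the ORDER BY ... LIMIT N optimization listed above; the table and columns are placeholders:

```sql
-- Top-N query: sort by a column and keep only a small number of rows.
SELECT user_id, event_time
FROM events
ORDER BY event_time DESC
LIMIT 10;
```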