v25.10
Backward Incompatible Change (12)
- Changed default schema_inference_make_columns_nullable setting
- Query result cache will ignore the log_comment
- In previous versions, queries with table functions named
- Forbid using the Dynamic type
- The storage_metadata_write_full_object_key server option is turned
- Decrease replicated_deduplication_window_seconds from one week down
- Rename setting query_plan_use_new_logical_join_step to query_plan_use_logical_join_step
- Allow the tokenizer parameter
- Renamed functions searchAny and searchAll
- Remove cache_hits_threshold
- Two slight changes to how min_free_disk_ratio_to_perform_insert and
- Enable async mode
Bug Fix (user-visible misbehavior in an official stable release) (103)
- Fix GeoParquet causing client protocol errors
- Fix resolving host-dependent functions like shardNum
- Fix incorrect handling of pre-epoch dates
- Fix ALTER COLUMN IF EXISTS commands failing
- Fix inferring Date/DateTime/DateTime64 on dates that are
- Fixes a crash where some valid user-submitted
- Support JSON/Dynamic types
- Fix result of function calculated
- Fix LOGICAL_ERROR
- Fix data lake tables with a percent-encoded
- Fix incorrect IS NULL behavior on nullable
- Fix incorrect accounting of temporary data deallocations
- Function checkHeaders is now properly validating
- Align the behaviour of toDate and
- Fix logical error with parallel replicas
- Respect setting input_format_try_infer_variants in schema inference cache
- Make pathStartsWith only match paths under
- Fix logical errors
- Fix "Too large size passed to allocator"
- Fix lightweight updates with subqueries that read
- Fix move-to-prewhere optimization, which did not work
- Fix applying patches to columns with default
- Fix segmentation fault
- Fix EmbeddedRocksDB upgrade
- Fix direct reading from the text
- Prevent privilege with non-existent engine
- Ignore only not found errors
- Fix dictionaries with YTSaurus source and *range_hashed
- Fix creating an array of empty tuples
- Check for illegal columns during temporary table
- Never put hive partition columns
- Fix preparing reading
- Fix access validation on select and
- Allow creating data skipping index
- Avoid leaking of tracked memory
- Fixed a bug that might lead
- Exclude userspace page cache bytes
- Fix a bug
- Fix incorrect handling of command_read_timeout
- Fix incorrect SELECT * REPLACE behavior
- Fix two-level aggregation
- Fix the generation of the output block
- Parallel replicas read mode could be chosen
- Fix handling of timestamp / timestamptz columns
- This closes
- Fix writing boolean values
- Fix unknown table error
- Fix reading null map subcolumn from Variants
- Fix handling error
- Fix several skip
- Fix applying "use_native_copy"
- ClickHouse crashes if ArrowStream file has non-unique
- Fix fatal using approx_top_k and finalizeAggregation
- Fix merge with projections
- Remove injective functions
- Fix for incorrect granules/partitions elimination
- Returns affected rows count after query
- Restrict use of filter pushdown
- Applies URI normalization before evaluation
- Fix logical error
- Fix "High ClickHouse memory usage" warning
- Fix possible data corruption
- Fix possible uncaught exception while reading system
- Fix crash
- Now ON CLUSTER queries will take less
- Now DDL worker cleanup outdated hosts
- Fix running ClickHouse w/o cgroups
- Do proper undo of the move directory
- Fix propagation of is_shared flag
- Fix a workload setting max_cpu_share
- Fix bug that very heavy mutations
- Now correlated subqueries will work
- Avoid trying
- Now datalakes catalogs will be shown
- Fix DatabaseReplicated to respect
- Positional arguments are now explicitly disabled
- Fix quadratic complexity
- Make ALTER COLUMN
- Add to the database
- Fix aggregation of sparse columns
- Fix "column not found" error
- Fix makes the setting to control attempts
- PR just for making compatibility
- Fix UBSAN
- Fix coalescing merge tree
- Forbid deletes for iceberg_format_version=1
- Fix the move operation of plain-rewritable disks
- Fix SQL SECURITY DEFINER with *cluster functions
- Fix potential crash caused by concurrent mutation
- Fix reading from the text
- Poco::TimeoutException exception thrown from Poco::Net::HTTPChunkedStreamBuf::readFromDevice leads
- Backported in #88910: After recovering, a Replicated
- Fix appending to system
- Fixed a bug where converting DateTime64
- Fix "having zero bytes error" with s3
- Fix access validation on select
- Catch exceptions when async logging fails
- Fix top_k to respect the threshold parameter
- Fix bug in the function reverseUTF8
- Backported in #88980: Do not check access
- Fix LOGICAL_ERROR
- Fix crash
- Fix performance degradation
Build/Testing/Packaging Improvement (5)
Experimental Feature (2)
New Feature (19)
- Add support for negative LIMIT and negative OFFSET
- Alias engine creates a proxy
- Support of operator IS NOT DISTINCT
- Add an ability to automatically create statistics
- New bloom filter index for text, sparse_gram
- New conv function for converting numbers between
- Add LIMIT BY ALL syntax
- Add support for querying Apache Paimon
- Add studentTTestOneSample aggregate
- Aggregate function quantilePrometheusHistogram, which accepts the upper
- New system table for delta lake metadata
- Add ALTER TABLE REWRITE PARTS
- Add SYSTEM RECONNECT ZOOKEEPER command to force
- Limit the number of named collections through
- Add optimized case-insensitive variants of startsWith and
- Adds a way to provide WORKLOAD and
- Add a new table
- Add recursive variants of cp-cpr and mv-mvr
- Add session setting to exclude list of skip indexes from materialization on inserts
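The new conv function above presumably mirrors MySQL's CONV(N, from_base, to_base). As an illustration only, here is a minimal Python sketch of that base-conversion logic; the argument order and the supported base range 2..36 are assumptions borrowed from the MySQL counterpart, not confirmed by this changelog:

```python
def conv(number: str, from_base: int, to_base: int) -> str:
    """Convert `number` from from_base to to_base, CONV-style.

    Assumed behavior: bases 2..36, digits beyond 9 use letters A..Z.
    """
    if not (2 <= from_base <= 36 and 2 <= to_base <= 36):
        raise ValueError("bases must be in 2..36")
    # int() parses strings in any base from 2 to 36.
    value = int(number, from_base)
    if value == 0:
        return "0"
    digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    out = []
    while value:
        value, rem = divmod(value, to_base)
        out.append(digits[rem])
    return "".join(reversed(out))

print(conv("ff", 16, 10))   # "255"
print(conv("255", 10, 2))   # "11111111"
```

The equivalent SQL call would then look like `SELECT conv('ff', 16, 10)`, under the same assumption about the signature.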
Improvement (66)
- Now the function generateSerialID supports a non-constant
- Add optional start_value parameter to generateSerialID
- Add --semicolons_inline option in clickhouse-format
- Allow configuring server-level throttling when the configuration
- MannWhitneyUTest no longer throws an exception
- Remove previous remote blobs if metadata transaction
- Fix optimization pass
- When HTTP clients set the header X-ClickHouse-100-Continue
- Mask S3 credentials in logs
- Make query plan optimizations visible
- Change for SYSTEM DROP DATABASE REPLICA
- Fix inconsistent
- Iceberg table state is not stored
- Make bucket lock in S3Queue ordered mode
- Provide hints when a user has
- Skip index analysis when there are no
- Allow disabling utf8 encoding
- Disable s3_slow_all_threads_after_retryable_error by default
- Rename table function arrowflight to arrowFlight
- Update clickhouse-benchmark to accept using - if
- Make flushing to system.crash_log
- Added a setting inject_random_order_for_select_without_order_by
- Improve joinGet error message so
- Add ability to check an arbitrary Keeper
- Redirect heavy ytsaurus requests to heavy proxies
- Fix rollbacks of unlink/rename/removeRecursive/removeDirectory/etc operations and also
- Add keeper_server
- Support --connection
- Now setting max_insert_threads will take effect
- Add histogram and dimensional metrics to PrometheusMetricsWriter
- Function hasToken now returns zero matches
- Add text index
- Add a new ZooKeeperSessionExpired metric which indicates
- Use S3 storage client
- Fix incorrect handling of settings max_joined_block_size_rows and
- Setting enable_http_compression is now the default
- Add a new entry in system
- Add from and to values to
- Add more information for performance tracking
- Filesystem cache improvement: reuse cache priority iterator
- Add ability to limit requests for Keeper
- Make clickhouse-benchmark to not include stacktraces
- Avoid utilizing thread pool asynchronous marks loading
- Allow create table/table functions/dictionaries with subset
- From now on, system.zookeeper_connection_log is enabled
- Make TCP and HTTP behavior consistent
- Remove custom MemoryPools for reading Arrow/ORC/Parquet
- Allow to create Replicated database without arguments
- Support to connect to TLS port
- Added a new profile event
- Enable the analyzer
- Internal query planning improvement: use JoinStepLogical
- Add alias for hasAnyTokens (hasAnyToken) and hasAllTokens
- Enable global sampling profiler
- Fix that is seen with copy and
- Make function lag case insensitive
- Allow clickhouse-local
- Add config keeper_server
- JSON columns are now pretty printed
- Store clickhouse-client files
- Fix memory leak due to GLOBAL
- Add overload to hasAny/hasAllTokens to accept
- Add a step to postinstall script
- Check credentials in the Web UI only
- Limit exception message length
- Fix requesting the structure of a dataset
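Several entries above concern token-search functions such as hasAnyTokens and hasAllTokens (and the hasAnyToken alias). A rough Python sketch of their any/all semantics, assuming a simple split-on-non-alphanumeric tokenizer; the real tokenizer is configurable, so this is only an illustration of the matching logic, not ClickHouse's implementation:

```python
import re

def tokens(text: str) -> list[str]:
    # Assumed tokenizer: split on runs of non-alphanumeric characters.
    return [t for t in re.split(r"[^0-9A-Za-z]+", text) if t]

def has_any_tokens(haystack: str, needles: list[str]) -> bool:
    # True if at least one needle occurs as a whole token.
    toks = set(tokens(haystack))
    return any(n in toks for n in needles)

def has_all_tokens(haystack: str, needles: list[str]) -> bool:
    # True only if every needle occurs as a whole token.
    toks = set(tokens(haystack))
    return all(n in toks for n in needles)

line = "error: connection refused by host"
print(has_any_tokens(line, ["timeout", "refused"]))    # True
print(has_all_tokens(line, ["connection", "refused"])) # True
print(has_all_tokens(line, ["connection", "timeout"])) # False
```

Whole-token matching is what distinguishes these functions from plain substring search: "refused" matches above, but "fuse" would not.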
Performance Improvement (30)
- Implement lazy columns replication
- A new physical layout for String columns
- Support read in order
- Bloom filter for JOIN queries
- Improved query performance by refactoring the order
- Bunch of micro-optimizations to speed up small
- Compress logs and profile events
- Improve the performance of case sensitive string
- Reduce memory allocation and memory copy
- Provides a logic regarding pushing down the disjunction JOIN predicates
- Fix or suffix by using the new
- Fix performance degradation caused
- Add new joined_block_split_single_row
- Improve the performance on a large number
- Improved performance of building text index
- Improve the performance on a large number
- Improve the performance of all queries
- Enable saving marks
- SELECT query with FINAL clause on a ReplacingMergeTree
- Reduce the impact of not using fail
- Avoid full scan
- Improved performance of functions tokens, hasAllTokens, hasAnyTokens
- Inline AddedColumns::appendFromBlock for slightly better JOIN performance
- Client autocompletion is faster and more consistent
- Add new dictionary_block_frontcoding_compression text index parameter
- Squash data from all threads before inserting
- Add setting temporary_files_buffer_size to control size
- Add support of direct reading from text
- Queries with tables from Data Lakes catalogs
- Internal heuristic for tuning of the background
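The dictionary_block_frontcoding_compression text-index parameter above refers to front coding, a classic technique for compressing sorted term dictionaries: each term stores only the length of the prefix it shares with the previous term plus its new suffix. A self-contained Python sketch of the general idea (not ClickHouse's actual on-disk format):

```python
def front_code(sorted_terms: list[str]) -> list[tuple[int, str]]:
    """Encode a sorted term list as (shared-prefix length, suffix) pairs."""
    coded, prev = [], ""
    for term in sorted_terms:
        # Length of the common prefix with the previous term.
        lcp = 0
        for a, b in zip(prev, term):
            if a != b:
                break
            lcp += 1
        coded.append((lcp, term[lcp:]))
        prev = term
    return coded

def front_decode(coded: list[tuple[int, str]]) -> list[str]:
    """Rebuild the original terms from the front-coded pairs."""
    terms, prev = [], ""
    for lcp, suffix in coded:
        prev = prev[:lcp] + suffix
        terms.append(prev)
    return terms

terms = ["token", "tokenize", "tokenizer", "tokens"]
coded = front_code(terms)
print(coded)  # [(0, 'token'), (5, 'ize'), (8, 'r'), (5, 's')]
assert front_decode(coded) == terms
```

Because index dictionaries are stored sorted, adjacent terms share long prefixes, so this encoding shrinks the dictionary considerably while still allowing sequential decoding within a block.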