v25.7
Backward Incompatible Change
Bug Fix (user-visible misbehavior in an official stable release)
- Fix the wrong default value
- Fix inconsistent
- Fix inconsistent
- Fix inconsistent
- Fix inconsistent
- Use the proper error code
- Fix logical error
- Reduce rows to ensure the correctness
- For queries with combination of ORDER
- Fix excessive granule skipping
- Fix logical error with operator and Join
- Fix a crash
- Fix incorrect behavior of to_utc_timestamp and from_utc_timestamp
- For some queries executed
- Fix logical error during materialize projection
- Fix incorrect TTL recalculation
- Fix Parquet bloom filter
- Fix possible crash
- Add backquotes to database and table names
- Fix IN execution with transform_null_in=1 with null
- Don't validate experimental/suspicious types
- Fix "Context has expired" during merges
- Fix monotonicity of the cast function
- Fix the issue where required columns are
- In previous versions, the server returned excessive content
- Previously, MongoDB table engine definitions could include
- Fix possible crash
- Fix filter analysis
- Fix LOGICAL_ERROR and following crash
- Fix S3 table function argument validation
- Fix data races
- Fix DatabaseReplicated::getClusterImpl
- Fix copy-paste error in arraySimilarity, disallowing
- Fix the Not found column error
- Fix bug in glue catalog
- Fix performance degradation
- When passing settings over URI, the last
- Fix "Context has expired"
- Fix possible deadlock
- Fix overflow
- Fix a bug
- Fix possible data-race between suggestion thread and
- Now ClickHouse can read Iceberg tables
- Fix the validation of async metrics settings
- Fix logical error
- Add expiration to AWS ECS token so
- Fix a bug
- Fix data-races
- Fix disabling boundary alignment
- Fix the crash if key-value storage is
- Fix hiding named collection values
- Fix a possible crash
- Fix cases where parsing of Time could
- Allow setting threadpool_writer_pool_size
- Fix LOGICAL_ERROR during row policy expression analysis
- Fix incorrect usage of parent metadata
- Support input strings of type "FixedString(N)"
- Update the code to fallback to read
- Fix deserialization of groupArraySample/groupArrayLast
- Fix backup of an empty Memory table
- Fix exception safety
- Keep track of the number of async
- Fix data races
- Setting use_skip_indexes_if_final_exact_mode optimization
- Set salt for auth data
- When using a non-caching Database implementation,
- Fix filter modification
- Fix LOGICAL_ERROR
- Fix incorrect output of function
- Fix performance degradation with the enabled analyzer
- Fix misleading error message
- Do not check for cyclic dependencies
- Fix issue with implicit reading of negative
- Do not use unrelated parts of
- Fix the regression
- Fix crash
- Fix crash
- Fix possible crash
- Fix LOGICAL_ERROR
- Fix no_sign_request
- Fix a crash that may happen
- Fix TOO_DEEP_SUBQUERIES exception
- Fix incorrect behavior
- Do not share async_read_counters between queries
- Disable parallel replicas when a subquery contains
- Resolve minor integer overflow
- Fix a bug
- Disable bounds-based file pruning
- Fix possible file cache not
- Update total watch count correctly when ephemeral
- Fix incorrect memory accounting around max_untracked_memory
- INSERT SELECT with UNION ALL could lead
- Allow zero value
- Fix endless loop
- Fix IndexUncompressedCacheBytes/IndexUncompressedCacheCells/IndexMarkCacheBytes/IndexMarkCacheFiles metrics
- Fix possible abort
- Introduce backward compatibility setting
- Fix deadlock on shutdown due to recursive
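One of the items above, the fix for IN execution with transform_null_in=1, concerns queries of the following shape. This is a minimal sketch, not the reproducer from the changelog; the literal values are illustrative:

```sql
-- With transform_null_in = 1, NULLs on either side of IN are handled
-- by rewriting the IN into a NULL-aware comparison; the fix addresses
-- incorrect results when the list or the left-hand side contains NULL.
SET transform_null_in = 1;
SELECT x, x IN (1, NULL) AS in_list
FROM (SELECT CAST(NULL AS Nullable(UInt8)) AS x UNION ALL SELECT 1);
```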
Build/Testing/Packaging Improvement
- Build a minimal C library
- Add a check for Nix submodule inputs
- Fix a list of problems that can
- Compile SymbolIndex on Mac and FreeBSD
- Bumped Azure SDK to v1.15.0.
- Add storage module from google-cloud-cpp to build
- Change Dockerfile.ubuntu for clickhouse-server
- Fix uploading builds to curl clickhouse
- Add busybox binary and install tools
- Added support for the CLICKHOUSE_HOST environment variable
Experimental Feature
- Add functions searchAny and searchAll which are
- Text index now supports the new split
- Changed the default index granularity value
- 256-bit bitmap stores the outgoing labels
- Enable zstd compression
- Promote vector similarity index to beta
- Remove experimental send_metadata logic related to experimental
- Integrate StorageKafka2 to system.kafka_consumers
- Estimate complex CNF/DNF, for example,
New Feature
- Add support for lightweight updates for MergeTree-family
- Support complex types
- Introduce support
- Read Iceberg data files by field ids
- Now ClickHouse supports compressed metadata.json files
- Support TimestampTZ
- Add AI-powered SQL generation to ClickHouse client
- Add a function to write Geo types
- Introduced two new access types: READ and
- NumericIndexedVector: new vector data-structure backed
- Workload setting max_waiting_queries is now supported
- Financial Functions
- Add geospatial
- Support _part_granule_offset virtual column
- Added SQL functions colorSRGBToOkLCH and colorOkLCHToSRGB
- Allow parameters in CREATE USER queries
- The system.formats table now contains extended information about
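The first item in this section, lightweight updates for MergeTree-family tables, introduces an UPDATE statement that avoids the heavyweight ALTER TABLE ... UPDATE mutation. A minimal sketch follows; the table and values are hypothetical, and the exact name of the gating setting may differ between builds:

```sql
CREATE TABLE orders (id UInt64, status String)
ENGINE = MergeTree ORDER BY id;

-- Lightweight UPDATE syntax, in contrast to the classic
-- ALTER TABLE orders UPDATE ... mutation:
SET allow_experimental_lightweight_update = 1;
UPDATE orders SET status = 'shipped' WHERE id = 42;
```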
Improvement
- Color parentheses in multiple colors
- Highlight metacharacters in LIKE/REGEXP patterns as you
- Highlighting in clickhouse-format and
- Now plain_rewritable disks are allowed as disks
- Allow backups
- Setting allow_experimental_join_condition marked as obsolete, because it
- Add pressure metrics to ClickHouse async metrics
- Added metrics MarkCacheEvictedBytes, MarkCacheEvictedMarks, MarkCacheEvictedFiles
- Support writing Parquet enum as byte array
- Support partition pruning
- Preserve element names when deriving supertypes
- Avoid depending on previous committed offset
- Add clickhouse-keeper-utils, a new command-line tool
- Total and per-user network throttlers are never
- Support writing geoparquets as output
- Forbid to start RENAME COLUMN alter mutation
- The Connection header is sent
- Tune the TCP servers' queue
- Add ability to reload max_local_read_bandwidth_for_server and max_local_write_bandwidth_for_server
- Add support for clearing all warnings
- Fix partition pruning with data lake cluster
- Fix reading partitioned data
- The reinterpret function now supports conversion
- Now the DataLake database throws a more convenient exception
- Improve CROSS JOIN
- Allow write/read map columns as Array
- List the licenses of Rust crates
- Macros like {uuid} can now be used
- Keeper improvement: move changelog files between disk
- Add new config keeper_server
- Add a new server
- Refactor dynamic resize feature of filesystem cache
- clickhouse-server without a configuration file will also
- We get the StorageID, and without taking
- Add table UUIDs into DatabaseCatalog
- Prevent user from using nan and inf
- Do not omit zero values
- Support specific permissions
- Allow RENAME COLUMN or DROP COLUMN involving
- Improve the precision of conversion from Decimal
- Scrollbars in the Web UI will look
- Allow using the Web UI by providing
- Add support for specifying extra Keeper ACL
- Now mutations snapshot will be built
- Adds ProfileEvent when Keeper rejects a write
- Add columns commit_time, commit_id to system
- In some cases, we need to have multiple dimensions
- Consolidate unknown settings warnings
- clickhouse-client now reports the local port
- Slightly better error handling in AsynchronousMetrics
- Shutdown SystemLogs after ordinary tables
- Add logs for S3Queue shutdown process
- Possibility to parse Time and Time64 as
- When distributed_ddl_output_mode='*_only_active', don't wait
- Do not output too long descriptions
- Add ability to parse part's prefix and
- Unify parameter names in ODBC and JDBC
- When the storage is shutting down, getStatus
- Add process resource metrics
- Enable create_if_not_exists, check_not_exists, remove_recursive feature flags
- Shutdown S3(Azure/etc)Queue streaming before shutting down any
- Enable Date/Date32
- Made exception messages for certain situations
- Introduce a configuration option
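As an illustration of the pressure-metrics item above ("Add pressure metrics to ClickHouse async metrics"), such metrics surface through the existing system.asynchronous_metrics table. A sketch, assuming the new metric names contain "pressure" (the exact names are not stated in the truncated entry):

```sql
-- Inspect PSI-style pressure metrics exposed via async metrics.
SELECT metric, value
FROM system.asynchronous_metrics
WHERE metric ILIKE '%pressure%'
ORDER BY metric;
```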
Performance Improvement
- Introduce async logging
- Parallel distributed INSERT SELECT is enabled by default
- When the aggregation query contains only
- Optimized the performance of HashJoin
- Trivial optimization for -If combinator
- Vector search queries using a vector similarity
- Respect merge_tree_min_{rows,bytes}_for_seek in filterPartsByQueryConditionCache
- Make the pipeline after the TOTALS step
- Fix filter by key
- Add new setting min_joined_block_size_rows
- ATTACH PARTITION no longer leads
- Optimize the generated plan
- Read only required columns
- Speedup comparisons of query trees during
- Add alignment in the Counter of ProfileEvents
- Optimizations for null_map and JoinMask
- Avoid calculating a hash on each access
- Don't pre-allocate memory for result columns beforehand
- Minimize memory copy in port headers during
- Improve the startup of clickhouse-keeper when it
- Reduce lock contention with high concurrent load
- Improved performance of the ProtobufSingle input format
- Improve the performance of pipeline building that
- Optimize MergeTreeReadersChain::getSampleBlock that speeds up short queries
- Speedup tables listing in data catalogs
- Introduce jitter
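For the second item in this section ("Parallel distributed INSERT SELECT is enabled by default"), the behavior is governed by a session setting, so the old behavior remains available per session. A sketch with hypothetical table names; the value semantics of the setting may vary by version:

```sql
-- Opt back out of parallel distributed INSERT SELECT for this session:
SET parallel_distributed_insert_select = 0;
INSERT INTO dist_table SELECT * FROM source_table;
```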