v.19.4
Bug Fixes
- This release also contains all bug fixes from 19.3 and 19.1.
- Fixed a bug in data skipping indices: the order of granules after INSERT was incorrect.
- Fixed set index for Nullable and LowCardinality columns.
- Correctly set `update_time` on full `executable` dictionary update.
- Fixed broken progress bar in 19.3.
- Fixed inconsistent values of MemoryTracker when a memory region was shrunk, in certain cases.
- Fixed undefined behaviour in ThreadPool.
- Fixed a very rare crash with the message `mutex lock failed: Invalid argument` that could happen when a MergeTree table was dropped concurrently with a SELECT.
- ODBC driver compatibility with the LowCardinality data type.
- FreeBSD: fix for the `AIOContextPool: Found io_event with unknown id 0` error.
- The `system.part_log` table was created regardless of configuration.
- Fixed undefined behaviour in the `dictIsIn` function for cache dictionaries.
- Fixed a deadlock when a SELECT query locks the same table multiple times.
- Disabled `compile_expressions` by default until we get our own LLVM contrib and can test it with clang and ASan.
- Prevent `std::terminate` when `invalidate_query` for a `clickhouse` external dictionary source has returned a wrong resultset (empty, or more than one row, or more than one column).
- Avoid deadlock when the `invalidate_query` for a dictionary with a `clickhouse` source involved the `system.dictionaries` table or the `Dictionaries` database (rare case).
- Fixes for CROSS JOIN with empty WHERE.
- Fixed segfault in the `replicate` function when a constant argument is passed.
- Fixed lambda functions with the predicate optimizer.
- Multiple fixes for multiple JOINs.
- Fixed remote queries which contain both LIMIT BY and LIMIT.
- Fixed reading from an Array(LowCardinality) column in the rare case when the column contained a long sequence of empty arrays.
- Fixed crash in FULL/RIGHT JOIN when joining on Nullable vs. non-Nullable columns.
- Fixed segmentation fault in clickhouse-copier.
- Avoid `std::terminate` in case of memory allocation failure.
- Fixed CapnProto reading from buffer.
- Fixed the `Unknown log entry type: 0` error after an OPTIMIZE TABLE FINAL query.
- Wrong arguments to the `hasAny` or `hasAll` functions could lead to a segfault.
- A deadlock could happen while executing a `DROP DATABASE dictionary` query.
- Fixed undefined behaviour in the `median` and `quantile` functions.
- Fixed compression level detection when `network_compression_method` is in lowercase.
- Fixed that the `<timezone>UTC</timezone>` setting was ignored.
- Fixed the behaviour of the `histogram` function with Distributed tables.
- Fixed TSan report `destroy of a locked mutex`.
- Fixed TSan report on shutdown due to a race condition in system logs usage.
- Fixed rechecking of parts in ReplicatedMergeTreeAlterThread in case of error.
- Arithmetic operations on intermediate aggregate function states were not working for constant arguments (such as subquery results).
- Always backquote column names in metadata.
- Fixed crash in ALTER ..
- Fixed segfault in JOIN ON with `enable_optimize_predicate_expression` enabled.
- Fixed a bug with adding an extraneous row after consuming a protobuf message from Kafka.
- Fixed a race condition in SELECT from `system.tables` if the table is renamed or altered concurrently.
- Fixed a data race when fetching a data part that is already obsolete.
- Fixed a rare data race that can happen during RENAME of a table of the MergeTree family.
- Fixed segmentation fault in the `arrayIntersect` function.
- Fixed the `No message received` exception while fetching parts between replicas.
- Fixed a wrong result of the `arrayIntersect` function in case of several repeated values in a single array.
- Fixed a race condition during concurrent ALTER COLUMN queries that could lead to a server crash.
- Fixed parameter deduction in ALTER MODIFY of a column CODEC when the column type is not specified.
- The `cutQueryStringAndFragment()` and `queryStringAndFragment()` functions now work correctly when the URL contains a fragment and no query.
- Fixed a rare bug when the `min_bytes_to_use_direct_io` setting is greater than zero, which occurs when a thread has to seek backward in a column file.
- Fixed wrong argument types for aggregate functions with LowCardinality arguments.
- Fixed the result of the `toISOWeek` function for the year 1970.
- Fixed duplication of DROP, TRUNCATE and OPTIMIZE queries when executed ON CLUSTER for the ReplicatedMergeTree* table family.
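The URL-function fix above can be exercised with a query like the following. This is a sketch to run against a ClickHouse server of this release; the example URL is chosen for illustration:

```sql
-- A URL with a fragment but no query string: previously mishandled by
-- both functions, now the fragment is detected correctly.
SELECT
    queryStringAndFragment('http://example.com/page#section')    AS qs_and_fragment,
    cutQueryStringAndFragment('http://example.com/page#section') AS url_without_either;
```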
Build/Testing/Packaging Improvement
- Added support for clang-9 #4604 (alexey-milovidov)
- Fixed wrong `__asm__` instructions (again) #4621 (Konstantin Podshumok)
- Added the ability to specify settings for clickhouse-performance-test from the command line.
- Added dictionary tests to integration tests.
- Added queries from the benchmark on the website to automated performance tests.
- `xxhash.h` does not exist in external lz4 because it is an implementation detail and its symbols are namespaced with the `XXH_NAMESPACE` macro.
- Fixed a case when the `quantileTiming` aggregate function can be called with a negative or floating point argument (this fixes a fuzz test with the undefined behaviour sanitizer).
- Spelling error corrections.
- Fixed compilation on Mac.
- Build fixes for FreeBSD and various unusual build configurations.
- Added a way to launch the clickhouse-server image as a custom user.
New Features
- Added full support for the Protobuf format (input and output, nested data structures).
- Added bitmap functions with Roaring bitmaps.
- Parquet format support.
- N-gram distance was added for fuzzy string comparison.
- Combine rules for graphite rollup from dedicated aggregation and retention patterns.
- Added `max_execution_speed` and `max_execution_speed_bytes` to limit resource usage.
- Implemented the `flatten` function.
- Added the `arrayEnumerateDenseRanked` and `arrayEnumerateUniqRanked` functions (like `arrayEnumerateUniq`, but they allow fine-tuning of array depth to look inside multidimensional arrays).
- Multiple JOINs with some restrictions: no asterisks, no complex aliases in ON/WHERE/GROUP BY/… #4462 (Artem Zuikov)
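The new array and bitmap features can be sketched with a couple of queries; the literal values here are illustrative:

```sql
-- flatten turns a multidimensional array into a flat one.
SELECT flatten([[1, 2], [3, 4]]);   -- [1, 2, 3, 4]

-- Bitmap functions built on Roaring bitmaps: build two bitmaps,
-- intersect them, and convert the result back to an array.
SELECT bitmapToArray(bitmapAnd(bitmapBuild([1, 2, 3]), bitmapBuild([2, 3, 4])));   -- [2, 3]
```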
Performance Improvements
- Improved heuristics of the "move to PREWHERE" optimization.
- Use proper lookup tables that use HashTable's API for 8-bit and 16-bit keys.
- Improved performance of string comparison.
- Cleanup of the distributed DDL queue in a separate thread, so that it does not slow down the main loop that processes distributed DDL tasks.
- When `min_bytes_to_use_direct_io` is set to 1, not every file was opened with O_DIRECT mode, because the data size to read was sometimes underestimated by the size of one compressed block.
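The `min_bytes_to_use_direct_io` behaviour above applies per query; as a sketch, it can be set in a SELECT's SETTINGS clause (the table name and the 10 MiB threshold are placeholders):

```sql
-- Use O_DIRECT reads for this scan once the estimated read size
-- exceeds the threshold (10 MiB here, chosen for illustration).
SELECT count()
FROM some_table
SETTINGS min_bytes_to_use_direct_io = 10485760;
```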