Introduction

ClickHouse is a column-oriented database management system open-sourced in 2016 by Yandex, Russia's largest search engine. It is written in C++ and targets OLAP workloads.

Why use it
On large data volumes it returns query results with very low latency.

Author's note: at the scale of hundreds of millions of rows, a single node can answer queries in milliseconds and can handle tens of billions of rows; aggregations such as count and sum can run around 100x faster than MySQL.

Key features

  1. Columnar storage
  2. Data compression
  3. Support for most of standard SQL
  4. Partitioning (similar to table sharding)
  5. Sparse indexes as the main index implementation
  6. Distributed deployment
  7. Fast approximate query results when some precision can be sacrificed
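The approximate-result point refers to aggregate functions such as uniq and quantile, which trade a small amount of precision for speed; exact counterparts (uniqExact, quantileExact) also exist. A sketch (events and user_id are illustrative names, not from this article):

```sql
SELECT
    uniq(user_id)      AS approx_users,  -- approximate, fast, bounded memory
    uniqExact(user_id) AS exact_users    -- exact, uses more memory
FROM events;
```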

Drawbacks

  1. No full transaction support.
  2. No ability to update or delete existing data at high frequency and low latency.
  3. Sparse indexes make ClickHouse a poor fit for point queries that retrieve a single row by key.
  4. No auto-increment column type.

Source: ClickHouse使用总结 (推迟享受。's blog, CSDN)

ClickHouse cluster architecture

ClickHouse shards

Definition
Every server node in ClickHouse can be called a shard.
The Distributed table engine stores no data itself; it acts as a transparent proxy for the distributed table and automatically handles writing, distributing, querying and routing data across the cluster.

Role of ZooKeeper

Install 3 ZooKeeper nodes; ZooKeeper 3.4.5 or later is required.

Any existing ZooKeeper cluster can be used; ClickHouse stores its metadata in a directory inside it (the directory is specified when a replicated table is created).

If ZooKeeper is not configured in the config file, replicated tables cannot be created, and any existing replicated tables become read-only.

ZooKeeper does not take part in any actual data transfer. In ClickHouse it is used mainly for replica synchronization of ReplicatedMergeTree tables and for operations on Distributed tables, so make sure the ZooKeeper cluster has enough disk space and is highly available.

Checking that the ZooKeeper configuration works

If ZooKeeper is configured correctly, the system.zookeeper table exists; if it does not, check the ZooKeeper configuration on each node. The same table can be used to inspect the metadata stored in ZooKeeper:

select * from system.zookeeper where path = '/';

Building the ClickHouse cluster with Docker

System preparation

# disable firewalld, SELinux and swap
systemctl disable firewalld && systemctl stop firewalld
setenforce 0
sed -ri 's#(^SELINUX=).*#\1disabled#g' /etc/selinux/config
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

Setting up the ZooKeeper cluster

Install 3 ZooKeeper nodes; ZooKeeper 3.4.5 or later is required.

On each server, create a directory to hold the ZooKeeper files, here /usr/soft/zookeeper/, then run the start command below on each server in turn.

server01:

Port 2181 is exposed for clients; ports 2888 and 3888 are used for cluster communication.

docker run -d --name zookeeper_node --restart always \
-v /usr/soft/zookeeper/data:/data \
-v /usr/soft/zookeeper/datalog:/datalog \
-v /usr/soft/zookeeper/logs:/logs \
-v /usr/soft/zookeeper/conf:/conf \
--network host \
-e ZOO_MY_ID=1 zookeeper

server02:

docker run -d --name zookeeper_node --restart always \
-v /usr/soft/zookeeper/data:/data \
-v /usr/soft/zookeeper/datalog:/datalog \
-v /usr/soft/zookeeper/logs:/logs \
-v /usr/soft/zookeeper/conf:/conf \
--network host \
-e ZOO_MY_ID=2 zookeeper

server03:

docker run -d --name zookeeper_node --restart always \
-v /usr/soft/zookeeper/data:/data \
-v /usr/soft/zookeeper/datalog:/datalog \
-v /usr/soft/zookeeper/logs:/logs \
-v /usr/soft/zookeeper/conf:/conf \
--network host \
-e ZOO_MY_ID=3 zookeeper
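After all three containers are up, each node's role can be checked with ZooKeeper's four-letter srvr command (or with zkServer.sh status inside the container); one node should report Mode: leader and the other two Mode: follower:

```shell
echo srvr | nc localhost 2181 | grep Mode
docker exec zookeeper_node zkServer.sh status
```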

Creating directories

Create the ClickHouse working directories on every node:

mkdir -p /data/clickhouse/data
mkdir -p /data/clickhouse/logs
mkdir -p /data/clickhouse/conf

Configuration files

1. config.xml

On every node, place the file config.xml under /data/clickhouse/conf. Check the ports and adjust any that conflict with services already running.

This is essentially the stock ClickHouse server config.xml, which ships with the server and is too long to reproduce in full here. The settings that matter for this cluster are:

  - log paths: /var/log/clickhouse-server/clickhouse-server.log and clickhouse-server.err.log
  - service ports: http_port 8123, tcp_port 10000, mysql_port 9004, postgresql_port 9005, interserver_http_port 9009
  - listen_host set to 0.0.0.0 so other nodes can connect
  - data path /var/lib/clickhouse/
  - user and query-level settings kept in users.xml
  - the cluster definition pulled in from /etc/clickhouse-server/config.d/metrika.xml
  - the distributed DDL queue path /clickhouse/task_queue/ddl (used by ON CLUSTER statements)
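A minimal sketch of the config.xml fragment that ties these settings together (the incl attribute names are the conventional ones for a metrika.xml include and are an assumption, not copied verbatim from the original file):

```xml
<clickhouse>
    <http_port>8123</http_port>
    <tcp_port>10000</tcp_port>
    <interserver_http_port>9009</interserver_http_port>
    <listen_host>0.0.0.0</listen_host>
    <path>/var/lib/clickhouse/</path>
    <users_config>users.xml</users_config>

    <!-- pull the cluster, zookeeper and macros sections from metrika.xml -->
    <include_from>/etc/clickhouse-server/config.d/metrika.xml</include_from>
    <remote_servers incl="clickhouse_remote_servers" />
    <zookeeper incl="zookeeper-servers" optional="true" />
    <macros incl="macros" optional="true" />

    <!-- queue used by ON CLUSTER (distributed DDL) statements -->
    <distributed_ddl>
        <path>/clickhouse/task_queue/ddl</path>
    </distributed_ddl>
</clickhouse>
```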

2. metrika.xml

On every node, place the file metrika.xml under /data/clickhouse/conf. The replica name must be changed on every node so that no two nodes share one. The cluster built here has 2 shards and no replicas.

Note: /etc/hosts on every node must list the host names of all ZooKeeper and ClickHouse machines.

<yandex>
    <clickhouse_remote_servers>
        <clickhouse_cluster>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>bigdata1</host>
                    <port>10000</port>
                    <user>clickhouse</user>
                    <password>clickhouse</password>
                </replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>bigdata2</host>
                    <port>10000</port>
                    <user>clickhouse</user>
                    <password>clickhouse</password>
                </replica>
            </shard>
        </clickhouse_cluster>
    </clickhouse_remote_servers>
    <zookeeper-servers>
        <node><host>bigdata1</host><port>2181</port></node>
        <node><host>bigdata2</host><port>2181</port></node>
        <node><host>bigdata3</host><port>2181</port></node>
    </zookeeper-servers>
    <macros>
        <!-- adjust on each node; the replica name must be unique per node -->
        <shard>single</shard>
        <replica>r1</replica>
    </macros>
</yandex>

Combining shards and replicas
In production, high availability requires backing up each shard, i.e. giving each node's local table a replica, and combining shards and replicas in a circular-replication topology so the strengths of both are used. The figure (not reproduced here) shows a 3-shard, 1-replica distributed table.

Sample macro configuration

In the circular topology each node carries its own shard plus a backup of the previous node's shard:

node 1: shard 01, backup shard 03, replica worker_01
node 2: shard 02, backup shard 01, replica worker_02
node 3: shard 03, backup shard 02, replica worker_03
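With shard/replica macros defined per node, a replicated local table can be declared once with ON CLUSTER and the {shard}/{replica} substitutions. A sketch (the table, its columns, and the /clickhouse/tables/... path prefix are the common convention, not taken from the original):

```sql
CREATE TABLE default.t_replicated ON CLUSTER clickhouse_cluster
(
    id Int32,
    event_time DateTime
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/t_replicated', '{replica}')
PARTITION BY toYYYYMM(event_time)
ORDER BY id;
```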

3. users.xml

On every node, place the file users.xml under /data/clickhouse/conf; it holds user and resource settings.

This is the stock ClickHouse users.xml: a default profile with the usual resource limits (e.g. max_memory_usage 10000000000, roughly 10 GB), a default user whose password may be given in plain text or as a SHA256 / double-SHA1 hash (the comments in the file explain how to generate one), network access open to ::/0, and the default quota.

Users can also be created from the command line:

create database if not exists tutorial;
CREATE ROLE dba;

GRANT all ON *.* TO dba;
CREATE USER dba_u@'%' IDENTIFIED WITH sha256_password BY 'your_password';
GRANT dba TO dba_u;
GRANT all ON tutorial.* TO dba_u;

Starting the service

docker-compose -f docker-compose.yml up -d clickhouse

version: '2'
services:
  clickhouse:
    container_name: clickhouse
    image: clickhouse/clickhouse-server:22.9.7.34
    restart: always
    privileged: true
    environment:
      - TZ=Asia/Shanghai
      - CLICKHOUSE_USER=clickhouse
      - CLICKHOUSE_PASSWORD=clickhouse
      - CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1
    volumes:
      - /data/clickhouse/data:/var/lib/clickhouse
      - /data/clickhouse/logs:/var/log/clickhouse-server
      - /data/clickhouse/conf/config.xml:/etc/clickhouse-server/config.xml
      - /data/clickhouse/conf/users.xml:/etc/clickhouse-server/users.xml
      - /data/clickhouse/conf/metrika.xml:/etc/clickhouse-server/config.d/metrika.xml
      - /etc/hosts:/etc/hosts
    hostname: bigdata1
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    network_mode: 'host'
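Once the container is up, a quick smoke test against the HTTP interface (port 8123; the credentials come from the compose environment above):

```shell
curl 'http://localhost:8123/' \
    -u clickhouse:clickhouse \
    --data-binary 'SELECT version()'
```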

Sources:

https://www.cnblogs.com/chuijingjing/p/17051485.html

数据库 – ClickHouse安装及集群搭建 – 个人文章 – SegmentFault 思否

https://mp.weixin.qq.com/s/L0XIstgsrDiAGjhhhqoQrw

Clickhouse Docker集群部署

Docker快速搭建Clickhouse集群(3分片3副本) – 爱码网

Verification

Creating local tables

MergeTree itself does not replicate data between nodes; if ReplicatedMergeTree is used instead, writes are synchronized to the corresponding replicas. In real deployments, the local tables behind a distributed table are usually Replicated* tables.

The distributed table stores no data; storage is still handled by the local table t_cluster, and dist_t_cluster acts purely as a proxy.

If a table created with ON CLUSTER on any one node shows up with the same structure on all other nodes, the cluster definition is in effect.
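The cluster wiring can also be checked directly from the system tables (the cluster name is the one defined in metrika.xml):

```sql
SELECT cluster, shard_num, replica_num, host_name, port
FROM system.clusters
WHERE cluster = 'clickhouse_cluster';
```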

CREATE TABLE default.test ON CLUSTER clickhouse_cluster
(
name String DEFAULT 'lemonNan' COMMENT 'name',
age int DEFAULT 18 COMMENT 'age',
gongzhonghao String DEFAULT 'lemonCode' COMMENT 'official account',
my_time DateTime64(3, 'UTC') COMMENT 'time'
) ENGINE = ReplacingMergeTree()
PARTITION BY toYYYYMM(my_time)
ORDER BY my_time;

CREATE TABLE t_cluster ON CLUSTER clickhouse_cluster (
id Int16,
name String,
birth Date
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(birth)
ORDER BY id;

Creating the distributed table

CREATE TABLE default.dist_t_cluster ON CLUSTER clickhouse_cluster as t_cluster engine = Distributed(clickhouse_cluster, default, t_cluster, rand());

Inserting test data

Insert a few rows; querying the distributed table from any node then shows all of the data.

insert into dist_t_cluster values (1, 'aaa', '2021-02-01'), (2, 'bbb', '2021-02-02');
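To see how the rand() sharding key spread the rows, compare the distributed view with each node's local table (the local counts will differ from node to node):

```sql
-- on any node: all rows, via the Distributed proxy
SELECT count() FROM dist_t_cluster;

-- on each node: only the rows that landed on this shard
SELECT count() FROM t_cluster;
```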

Client tools

大数据ClickHouse(二十):ClickHouse 可视化工具操作 – 知乎