NATS: NATS Streaming Persistence
Preface
The current project needs a message queue so that some existing operations can be handled asynchronously. When choosing the middleware, I leaned toward solutions written in a language I already know well. I also factored in my long-term experience with Kubernetes and other cloud-native infrastructure, as well as where the project is likely to evolve. Those considerations led to the final choice.
Use cases for message queues
Let's start with why. A message queue is non-blocking and decoupling: its main job is to shorten request response time and decouple components. When designing an application, we therefore move operations that take a long time and whose results are not needed synchronously (data computation, background tasks, and so on) into a message queue. Thanks to the queue, as long as the message format stays consistent, the sender and receiver can exchange data without maintaining a live connection to each other, and without interfering with each other.
As a concrete example:
When a user registers in your system, the server typically performs the following steps after receiving the request:
- Validate the username and other fields, and if everything checks out, create the user record in the database.
- Send a confirmation email for email sign-ups, or an SMS for phone sign-ups.
- Analyze the user's profile so that later you can recommend like-minded people and tailor content to their interests.
- Send the user a getting-started guide.
From the user's point of view, registration really only requires the first step. Still, to keep the system robust, it pays to process the rest in separate stages, so that a failure in any later step does not take down the whole flow.
Or consider another situation: your software suddenly attracts a flood of users, and registration requests start failing under load. Perhaps the email gateway is saturated, or the computation needed for profile analysis is pegging the CPU. The database insert itself is fast, but requests get stuck sending email or running analysis, response times spike, and some requests time out. The standard remedy is to push these operations into a message queue (the producer/consumer model).
In day-to-day feature development, there is usually no need to go looking for places to apply a message queue. Rather, when the running system hits a performance bottleneck, review the business logic for time-consuming operations that could run asynchronously. If you find them, a message queue is worth introducing; if you don't, adding one blindly only increases development and maintenance cost and is unlikely to bring any noticeable performance gain.
To sum up what a message queue buys you:
- Peak shaving. Think of it as a reservoir. Kafka in an ELK log-collection pipeline is a classic example: during log traffic spikes it protects the system's stability by trading away a little real-time responsiveness.
- Making synchronous flows asynchronous. Within a flow that is nominally one synchronous operation, steps off the critical path can be moved to a queue. In e-commerce, for instance, once an order is placed you return "purchase successful" to the customer right away, while the queue drives the remaining work: notifying the billing group of the debit, updating inventory, initiating shipping, and awarding loyalty points to certain user groups.
NATS Streaming overview
NATS Streaming is a data streaming system powered by NATS and written in Go. The executable is named nats-streaming-server. It embeds, extends, and interoperates seamlessly with the core NATS platform, is actively maintained, and is distributed as open source under the Apache-2.0 license.

Features
In addition to the features of the core NATS platform, NATS Streaming provides the following:
- Enhanced message protocol
NATS Streaming implements its own enhanced message format using Google Protocol Buffers. These messages travel as opaque binary streams over the core NATS platform, so no changes to the base protocol are required.
NATS Streaming messages contain the following fields:
- Sequence - a globally ordered sequence number for the subject's channel
- Subject - the NATS Streaming delivery subject
- Reply - the corresponding "reply-to" subject, if any
- Data - the message payload
- Timestamp - the receive timestamp, in nanoseconds
- Redelivered - a flag indicating that the server may be resending this message
- CRC32 - an optional cyclic redundancy checksum; CRCs are a standard error-detection technique in data storage and data communication, and the IEEE CRC32 algorithm is used here
- Message/event persistence
NATS Streaming offers configurable message persistence, with memory or flat files as the targets. The storage subsystem exposes a public interface, so developers can implement their own persistence engines.
- At-least-once delivery
NATS Streaming offers acknowledgement-based guarantees between publishers and the server, and between subscribers and the server.
Messages are persisted in server memory or secondary storage (or other external storage) and are redelivered to eligible subscribing clients as needed.
Publisher rate matching: NATS Streaming provides a connection option named MaxPubAcksInFlight that caps the number of unacknowledged messages a publisher may have outstanding at any time. When the cap is reached, asynchronous publish calls block until the unacknowledged count falls below it.
Per-subscriber rate limiting: a subscription option named MaxInFlight sets the maximum number of unacknowledged messages the server will deliver to a subscriber. Once that limit is reached, NATS Streaming pauses delivery to the subscriber until the unacknowledged count falls below the threshold.
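The MaxPubAcksInFlight behavior can be sketched with a buffered channel modeling the window of unacknowledged publishes: a publish blocks while the window is full, and each ack frees one slot. The names are illustrative only; this is not the stan.go client implementation.

```go
package main

import "fmt"

// publisher sketches MaxPubAcksInFlight-style flow control.
type publisher struct {
	inflight chan struct{}
}

func newPublisher(maxPubAcksInFlight int) *publisher {
	return &publisher{inflight: make(chan struct{}, maxPubAcksInFlight)}
}

// PublishAsync blocks while the window is full; send stands in for the
// wire call and hands back an ack callback invoked by the "server".
func (p *publisher) PublishAsync(msg string, send func(string, func())) {
	p.inflight <- struct{}{} // blocks when the window is full
	send(msg, func() { <-p.inflight })
}

func main() {
	p := newPublisher(2)
	var acks []func()
	send := func(msg string, ack func()) { acks = append(acks, ack) }

	p.PublishAsync("m1", send)
	p.PublishAsync("m2", send)
	// A third PublishAsync would block here until the server acks one.
	acks[0]() // ack for m1 arrives, freeing a slot
	p.PublishAsync("m3", send)
	fmt.Println("published 3 messages with a window of 2")
}
```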
- Historical message replay by subject
A new subscription can specify a start position in the channel's stored message stream. Delivery can then begin from:
1. the earliest message stored for the subject
2. the most recently stored message for the subject, commonly known as the "last value" or "initial value" cache
3. a specific date/time, in nanoseconds
4. a historical position relative to the current server time, e.g. the last 30 seconds
5. a specific message sequence number
- Durable subscriptions
A subscription can be given a "durable name" that survives client restarts. With a durable subscription, the server tracks the last acknowledged message sequence for that client ID and durable name. When the client restarts or resubscribes with the same client ID and durable name, the server resumes delivery from the earliest unacknowledged message.
Running NATS Streaming with Docker
As described above, what NATS Streaming adds over plain NATS is persistence, so that is what the demos below focus on.
Run with memory-based persistence:
docker run -ti -p 4222:4222 -p 8222:8222 nats-streaming:0.12.0
You should see output like the following:
[1] 2019/02/26 08:13:01.769734 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.12.0
[1] 2019/02/26 08:13:01.769811 [INF] STREAM: ServerID: arfYGWPtu7Cn8Ojcb1yko3
[1] 2019/02/26 08:13:01.769826 [INF] STREAM: Go version: go1.11.5
[1] 2019/02/26 08:13:01.770363 [INF] Starting nats-server version 1.4.1
[1] 2019/02/26 08:13:01.770398 [INF] Git commit [not set]
[4] 2019/02/26 08:13:01.770492 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2019/02/26 08:13:01.770555 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2019/02/26 08:13:01.770581 [INF] Server is ready
[1] 2019/02/26 08:13:01.799435 [INF] STREAM: Recovering the state...
[1] 2019/02/26 08:13:01.799461 [INF] STREAM: No recovered state
[1] 2019/02/26 08:13:02.052460 [INF] STREAM: Message store is MEMORY
[1] 2019/02/26 08:13:02.052552 [INF] STREAM: ---------- Store Limits ----------
[1] 2019/02/26 08:13:02.052574 [INF] STREAM: Channels: 100 *
[1] 2019/02/26 08:13:02.052586 [INF] STREAM: --------- Channels Limits --------
[1] 2019/02/26 08:13:02.052601 [INF] STREAM: Subscriptions: 1000 *
[1] 2019/02/26 08:13:02.052613 [INF] STREAM: Messages : 1000000 *
[1] 2019/02/26 08:13:02.052624 [INF] STREAM: Bytes : 976.56 MB *
[1] 2019/02/26 08:13:02.052635 [INF] STREAM: Age : unlimited *
[1] 2019/02/26 08:13:02.052649 [INF] STREAM: Inactivity : unlimited *
[1] 2019/02/26 08:13:02.052697 [INF] STREAM: ----------------------------------
As the output shows, the default is the in-memory store.
Run with file-based persistence:
docker run -ti -v /Users/gao/test/mq:/datastore -p 4222:4222 -p 8222:8222 nats-streaming:0.12.0 -store file --dir /datastore -m 8222
You should see output like the following:
[1] 2019/02/26 08:16:07.641972 [INF] STREAM: Starting nats-streaming-server[test-cluster] version 0.12.0
[1] 2019/02/26 08:16:07.642038 [INF] STREAM: ServerID: 9d4H6GAFPibpZv282KY9QM
[1] 2019/02/26 08:16:07.642099 [INF] STREAM: Go version: go1.11.5
[1] 2019/02/26 08:16:07.643733 [INF] Starting nats-server version 1.4.1
[1] 2019/02/26 08:16:07.643762 [INF] Git commit [not set]
[5] 2019/02/26 08:16:07.643894 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2019/02/26 08:16:07.643932 [INF] Server is ready
[1] 2019/02/26 08:16:07.672145 [INF] STREAM: Recovering the state...
[1] 2019/02/26 08:16:07.679327 [INF] STREAM: No recovered state
[1] 2019/02/26 08:16:07.933519 [INF] STREAM: Message store is FILE
[1] 2019/02/26 08:16:07.933570 [INF] STREAM: Store location: /datastore
[1] 2019/02/26 08:16:07.933633 [INF] STREAM: ---------- Store Limits ----------
[1] 2019/02/26 08:16:07.933679 [INF] STREAM: Channels: 100 *
[1] 2019/02/26 08:16:07.933697 [INF] STREAM: --------- Channels Limits --------
[1] 2019/02/26 08:16:07.933711 [INF] STREAM: Subscriptions: 1000 *
[1] 2019/02/26 08:16:07.933749 [INF] STREAM: Messages : 1000000 *
[1] 2019/02/26 08:16:07.933793 [INF] STREAM: Bytes : 976.56 MB *
[1] 2019/02/26 08:16:07.933837 [INF] STREAM: Age : unlimited *
[1] 2019/02/26 08:16:07.933857 [INF] STREAM: Inactivity : unlimited *
[1] 2019/02/26 08:16:07.933885 [INF] STREAM: ----------------------------------
PS
- When deployed on Kubernetes, use the file-based persistence strategy and mount a block device into the container so that the data survives restarts.
- For example, an EBS volume on AWS or an RBD volume on Ceph works for this.
- Port 4222 is used for client connections.
- Port 8222 is the HTTP monitoring port.
After startup, open localhost:8222 and you will see the monitoring page:

Startup flags
Streaming Server Options:
-cid, --cluster_id <string> Cluster ID (default: test-cluster)
-st, --store <string> Store type: MEMORY|FILE|SQL (default: MEMORY)
--dir <string> For FILE store type, this is the root directory
-mc, --max_channels <int> Max number of channels (0 for unlimited)
-msu, --max_subs <int> Max number of subscriptions per channel (0 for unlimited)
-mm, --max_msgs <int> Max number of messages per channel (0 for unlimited)
-mb, --max_bytes <size> Max messages total size per channel (0 for unlimited)
-ma, --max_age <duration> Max duration a message can be stored ("0s" for unlimited)
-mi, --max_inactivity <duration> Max inactivity (no new message, no subscription) after which a channel can be garbage collected (0 for unlimited)
-ns, --nats_server <string> Connect to this external NATS Server URL (embedded otherwise)
-sc, --stan_config <string> Streaming server configuration file
-hbi, --hb_interval <duration> Interval at which server sends heartbeat to a client
-hbt, --hb_timeout <duration> How long server waits for a heartbeat response
-hbf, --hb_fail_count <int> Number of failed heartbeats before server closes the client connection
--ft_group <string> Name of the FT Group. A group can be 2 or more servers with a single active server and all sharing the same datastore
-sl, --signal <signal>[=<pid>] Send signal to nats-streaming-server process (stop, quit, reopen)
--encrypt <bool> Specify if server should use encryption at rest
--encryption_cipher <string> Cipher to use for encryption. Currently support AES and CHAHA (ChaChaPoly). Defaults to AES
--encryption_key <sting> Encryption Key. It is recommended to specify it through the NATS_STREAMING_ENCRYPTION_KEY environment variable instead
Streaming Server Clustering Options:
--clustered <bool> Run the server in a clustered configuration (default: false)
--cluster_node_id <string> ID of the node within the cluster if there is no stored ID (default: random UUID)
--cluster_bootstrap <bool> Bootstrap the cluster if there is no existing state by electing self as leader (default: false)
--cluster_peers <string> List of cluster peer node IDs to bootstrap cluster state.
--cluster_log_path <string> Directory to store log replication data
--cluster_log_cache_size <int> Number of log entries to cache in memory to reduce disk IO (default: 512)
--cluster_log_snapshots <int> Number of log snapshots to retain (default: 2)
--cluster_trailing_logs <int> Number of log entries to leave after a snapshot and compaction
--cluster_sync <bool> Do a file sync after every write to the replication log and message store
--cluster_raft_logging <bool> Enable logging from the Raft library (disabled by default)
Streaming Server File Store Options:
--file_compact_enabled <bool> Enable file compaction
--file_compact_frag <int> File fragmentation threshold for compaction
--file_compact_interval <int> Minimum interval (in seconds) between file compactions
--file_compact_min_size <size> Minimum file size for compaction
--file_buffer_size <size> File buffer size (in bytes)
--file_crc <bool> Enable file CRC-32 checksum
--file_crc_poly <int> Polynomial used to make the table used for CRC-32 checksum
--file_sync <bool> Enable File.Sync on Flush
--file_slice_max_msgs <int> Maximum number of messages per file slice (subject to channel limits)
--file_slice_max_bytes <size> Maximum file slice size - including index file (subject to channel limits)
--file_slice_max_age <duration> Maximum file slice duration starting when the first message is stored (subject to channel limits)
--file_slice_archive_script <string> Path to script to use if you want to archive a file slice being removed
--file_fds_limit <int> Store will try to use no more file descriptors than this given limit
--file_parallel_recovery <int> On startup, number of channels that can be recovered in parallel
--file_truncate_bad_eof <bool> Truncate files for which there is an unexpected EOF on recovery, dataloss may occur
Streaming Server SQL Store Options:
--sql_driver <string> Name of the SQL Driver ("mysql" or "postgres")
--sql_source <string> Datasource used when opening an SQL connection to the database
--sql_no_caching <bool> Enable/Disable caching for improved performance
--sql_max_open_conns <int> Maximum number of opened connections to the database
Streaming Server TLS Options:
-secure <bool> Use a TLS connection to the NATS server without
verification; weaker than specifying certificates.
-tls_client_key <string> Client key for the streaming server
-tls_client_cert <string> Client certificate for the streaming server
-tls_client_cacert <string> Client certificate CA for the streaming server
Streaming Server Logging Options:
-SD, --stan_debug=<bool> Enable STAN debugging output
-SV, --stan_trace=<bool> Trace the raw STAN protocol
-SDV Debug and trace STAN
--syslog_name On Windows, when running several servers as a service, use this name for the event source
(See additional NATS logging options below)
Embedded NATS Server Options:
-a, --addr <string> Bind to host address (default: 0.0.0.0)
-p, --port <int> Use port for clients (default: 4222)
-P, --pid <string> File to store PID
-m, --http_port <int> Use port for http monitoring
-ms,--https_port <int> Use port for https monitoring
-c, --config <string> Configuration file
Logging Options:
-l, --log <string> File to redirect log output
-T, --logtime=<bool> Timestamp log entries (default: true)
-s, --syslog <string> Enable syslog as log method
-r, --remote_syslog <string> Syslog server addr (udp://localhost:514)
-D, --debug=<bool> Enable debugging output
-V, --trace=<bool> Trace the raw protocol
-DV Debug and trace
Authorization Options:
--user <string> User required for connections
--pass <string> Password required for connections
--auth <string> Authorization token required for connections
TLS Options:
--tls=<bool> Enable TLS, do not verify clients (default: false)
--tlscert <string> Server certificate file
--tlskey <string> Private key for server certificate
--tlsverify=<bool> Enable TLS, verify client certificates
--tlscacert <string> Client certificate CA for verification
NATS Clustering Options:
--routes <string, ...> Routes to solicit and connect
--cluster <string> Cluster URL for solicited routes
Common Options:
-h, --help Show this message
-v, --version Show version
--help_tls TLS help.
A quick look at persistence in the NATS Streaming source
NATS Streaming currently supports the following four persistence modes:
- MEMORY
- FILE
- SQL
- RAFT
Reading the source shows that the NATS Streaming store is built around an interface, which makes it easy to extend. The interface is as follows:
// Store is the storage interface for NATS Streaming servers.
//
// If an implementation has a Store constructor with StoreLimits, it should be
// noted that the limits don't apply to any state being recovered, for Store
// implementations supporting recovery.
//
type Store interface {
// GetExclusiveLock is an advisory lock to prevent concurrent
// access to the store from multiple instances.
// This is not to protect individual API calls, instead, it
// is meant to protect the store for the entire duration the
// store is being used. This is why there is no `Unlock` API.
// The lock should be released when the store is closed.
//
// If an exclusive lock can be immediately acquired (that is,
// it should not block waiting for the lock to be acquired),
// this call will return `true` with no error. Once a store
// instance has acquired an exclusive lock, calling this
// function has no effect and `true` with no error will again
// be returned.
//
// If the lock cannot be acquired, this call will return
// `false` with no error: the caller can try again later.
//
// If, however, the lock cannot be acquired due to a fatal
// error, this call should return `false` and the error.
//
// It is important to note that the implementation should
// make an effort to distinguish error conditions deemed
// fatal (and therefore trying again would invariably result
// in the same error) and those deemed transient, in which
// case no error should be returned to indicate that the
// caller could try later.
//
// Implementations that do not support exclusive locks should
// return `false` and `ErrNotSupported`.
GetExclusiveLock() (bool, error)
// Init can be used to initialize the store with server's information.
Init(info *spb.ServerInfo) error
// Name returns the name type of this store (e.g: MEMORY, FILESTORE, etc...).
Name() string
// Recover returns the recovered state.
// Implementations that do not persist state and therefore cannot
// recover from a previous run MUST return nil, not an error.
// However, an error must be returned for implementations that are
// attempting to recover the state but fail to do so.
Recover() (*RecoveredState, error)
// SetLimits sets limits for this store. The action is not expected
// to be retroactive.
// The store implementation should make a deep copy as to not change
// the content of the structure passed by the caller.
// This call may return an error due to limits validation errors.
SetLimits(limits *StoreLimits) error
// GetChannelLimits returns the limit for this channel. If the channel
// does not exist, returns nil.
GetChannelLimits(name string) *ChannelLimits
// CreateChannel creates a Channel.
// Implementations should return ErrAlreadyExists if the channel was
// already created.
// Limits defined for this channel in StoreLimits.PerChannel map, if present,
// will apply. Otherwise, the global limits in StoreLimits will apply.
CreateChannel(channel string) (*Channel, error)
// DeleteChannel deletes a Channel.
// Implementations should make sure that if no error is returned, the
// channel would not be recovered after a restart, unless CreateChannel()
// with the same channel is invoked.
// If processing is expecting to be time consuming, work should be done
// in the background as long as the above condition is guaranteed.
// It is also acceptable for an implementation to have CreateChannel()
// return an error if background deletion is still happening for a
// channel of the same name.
DeleteChannel(channel string) error
// AddClient stores information about the client identified by `clientID`.
AddClient(info *spb.ClientInfo) (*Client, error)
// DeleteClient removes the client identified by `clientID` from the store.
DeleteClient(clientID string) error
// Close closes this store (including all MsgStore and SubStore).
// If an exclusive lock was acquired, the lock shall be released.
Close() error
}
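The GetExclusiveLock contract documented above can be sketched for the simplest case, a single-process in-memory store: the first acquisition succeeds, repeated calls by the same instance still report success, and the lock is released on Close rather than via an Unlock method. This is an illustration of the contract only, not code from the project; a FILE or SQL store would back it with a lock file or the StoreLock table instead.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNotSupported mirrors the error the interface expects from stores
// that cannot provide an exclusive lock.
var ErrNotSupported = errors.New("not supported")

// memLock sketches the advisory-lock contract for an in-memory store.
type memLock struct {
	held bool
}

func (s *memLock) GetExclusiveLock() (bool, error) {
	s.held = true // nothing to contend with inside one process
	return true, nil
}

func (s *memLock) Close() error {
	s.held = false // the lock is released on Close, not via an Unlock call
	return nil
}

func main() {
	s := &memLock{}
	ok, err := s.GetExclusiveLock()
	fmt.Println(ok, err) // first call acquires the lock
	ok, _ = s.GetExclusiveLock()
	fmt.Println(ok) // idempotent for the same instance
	fmt.Println(s.Close())
}
```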
Official schemas are also provided for two databases, MySQL and PostgreSQL:
postgres.db.sql
CREATE TABLE IF NOT EXISTS ServerInfo (uniquerow INTEGER DEFAULT 1, id VARCHAR(1024), proto BYTEA, version INTEGER, PRIMARY KEY (uniquerow));
CREATE TABLE IF NOT EXISTS Clients (id VARCHAR(1024), hbinbox TEXT, PRIMARY KEY (id));
CREATE TABLE IF NOT EXISTS Channels (id INTEGER, name VARCHAR(1024) NOT NULL, maxseq BIGINT DEFAULT 0, maxmsgs INTEGER DEFAULT 0, maxbytes BIGINT DEFAULT 0, maxage BIGINT DEFAULT 0, deleted BOOL DEFAULT FALSE, PRIMARY KEY (id));
CREATE INDEX Idx_ChannelsName ON Channels (name);
CREATE TABLE IF NOT EXISTS Messages (id INTEGER, seq BIGINT, timestamp BIGINT, size INTEGER, data BYTEA, CONSTRAINT PK_MsgKey PRIMARY KEY(id, seq));
CREATE INDEX Idx_MsgsTimestamp ON Messages (timestamp);
CREATE TABLE IF NOT EXISTS Subscriptions (id INTEGER, subid BIGINT, lastsent BIGINT DEFAULT 0, proto BYTEA, deleted BOOL DEFAULT FALSE, CONSTRAINT PK_SubKey PRIMARY KEY(id, subid));
CREATE TABLE IF NOT EXISTS SubsPending (subid BIGINT, row BIGINT, seq BIGINT DEFAULT 0, lastsent BIGINT DEFAULT 0, pending BYTEA, acks BYTEA, CONSTRAINT PK_MsgPendingKey PRIMARY KEY(subid, row));
CREATE INDEX Idx_SubsPendingSeq ON SubsPending (seq);
CREATE TABLE IF NOT EXISTS StoreLock (id VARCHAR(30), tick BIGINT DEFAULT 0);
-- Updates for 0.10.0
ALTER TABLE Clients ADD proto BYTEA;
mysql.db.sql
CREATE TABLE IF NOT EXISTS ServerInfo (uniquerow INT DEFAULT 1, id VARCHAR(1024), proto BLOB, version INTEGER, PRIMARY KEY (uniquerow));
CREATE TABLE IF NOT EXISTS Clients (id VARCHAR(1024), hbinbox TEXT, PRIMARY KEY (id(256)));
CREATE TABLE IF NOT EXISTS Channels (id INTEGER, name VARCHAR(1024) NOT NULL, maxseq BIGINT UNSIGNED DEFAULT 0, maxmsgs INTEGER DEFAULT 0, maxbytes BIGINT DEFAULT 0, maxage BIGINT DEFAULT 0, deleted BOOL DEFAULT FALSE, PRIMARY KEY (id), INDEX Idx_ChannelsName (name(256)));
CREATE TABLE IF NOT EXISTS Messages (id INTEGER, seq BIGINT UNSIGNED, timestamp BIGINT, size INTEGER, data BLOB, CONSTRAINT PK_MsgKey PRIMARY KEY(id, seq), INDEX Idx_MsgsTimestamp (timestamp));
CREATE TABLE IF NOT EXISTS Subscriptions (id INTEGER, subid BIGINT UNSIGNED, lastsent BIGINT UNSIGNED DEFAULT 0, proto BLOB, deleted BOOL DEFAULT FALSE, CONSTRAINT PK_SubKey PRIMARY KEY(id, subid));
CREATE TABLE IF NOT EXISTS SubsPending (subid BIGINT UNSIGNED, `row` BIGINT UNSIGNED, seq BIGINT UNSIGNED DEFAULT 0, lastsent BIGINT UNSIGNED DEFAULT 0, pending BLOB, acks BLOB, CONSTRAINT PK_MsgPendingKey PRIMARY KEY(subid, `row`), INDEX Idx_SubsPendingSeq(seq));
CREATE TABLE IF NOT EXISTS StoreLock (id VARCHAR(30), tick BIGINT UNSIGNED DEFAULT 0);
# Updates for 0.10.0
ALTER TABLE Clients ADD proto BLOB;
Summary
A follow-up post will walk through the client code and cluster setup in detail, including how to build a highly available cluster on Kubernetes.