ES Series (2): How Multicast-Based Cluster Discovery Works



As a heavyweight search engine, ES needs more than a complete feature set and high performance: it must also be able to scale out almost arbitrarily. To some extent it is a big-data product, and scaling means clustering. A plain cluster alone is not enough, because it can do too little (a typical cluster is just a load-balanced set of equivalent peers), so ES has to build a cluster that fits its own needs: the service must discover and coordinate cluster instances automatically. Of course, that is only the first step toward scalability.

So how does ES discover its cluster members automatically? Let's dig in.

0. Preliminaries

Although what we want to study is how ES discovers nodes without any configuration, if you look at a recent ES release you will be surprised to find that the feature is gone: newer versions no longer support implicit cluster discovery. It was in fact removed after 5.0.

As for why it was removed, reliability probably played a large part. We will leave that question aside and simply discuss the mechanism from a theoretical point of view.

ES 2.1 still has automatic cluster discovery, so we will use that version as the reference. Even in the versions that have it, it exists only as a plugin; "no longer supported" in later versions really just means the plugin is no longer shipped.

The core mechanism, naturally, is multicast (or broadcast).

1. Overview of automatic discovery

We run into service auto-discovery all the time: RPC invocation, MQ message distribution, Docker cluster management, and so on.

So auto-discovery is a very common requirement. How is it usually solved? Typically with a registry: each component registers itself with the registry on startup, the registry pushes the change to the consumers, and the consumers thereby become aware of the new instance, which completes the discovery. It is an almost universal solution and easy to understand; a tiny sketch of the pattern follows below.
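To make the pattern concrete, here is a minimal, purely illustrative registry sketch in Java. The names (SimpleRegistry, DiscoveryListener) are made up for this article and do not come from ES or Dubbo; it only demonstrates the register / subscribe / notify flow described above.

    import java.util.*;
    import java.util.concurrent.*;

    interface DiscoveryListener {
        void onInstancesChanged(String service, List<String> addresses);
    }

    class SimpleRegistry {
        private final Map<String, Set<String>> services = new ConcurrentHashMap<>();
        private final Map<String, Set<DiscoveryListener>> listeners = new ConcurrentHashMap<>();

        // a provider registers itself; all subscribers of that service are notified
        void register(String service, String address) {
            services.computeIfAbsent(service, k -> ConcurrentHashMap.newKeySet()).add(address);
            notifyListeners(service);
        }

        // a consumer subscribes and immediately gets the current snapshot
        void subscribe(String service, DiscoveryListener listener) {
            listeners.computeIfAbsent(service, k -> ConcurrentHashMap.newKeySet()).add(listener);
            notifyListeners(service);
        }

        private void notifyListeners(String service) {
            List<String> snapshot = new ArrayList<>(services.getOrDefault(service, Collections.emptySet()));
            for (DiscoveryListener l : listeners.getOrDefault(service, Collections.emptySet())) {
                l.onInstancesChanged(service, snapshot);
            }
        }
    }

A provider calls register at startup and a consumer calls subscribe; as we will see, multicast-based discovery removes this middleman entirely by letting every node answer pings itself.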

A registry, however, is one more service to run. If you do not want that extra dependency, the nodes have to coordinate among themselves, or put differently, every node must be able to act as a potential registry.

The registry does play the discovery role, but what happens after discovery is up to each specific application. Besides the registry acting as postman, the upstream and downstream still have to cooperate.

Auto-discovery exists so that you can scale out at any time, and to some degree for high availability, so the registry itself must not become a single point of failure. In practice these components are themselves deployed as highly available clusters; that is exactly what they are built for.

And then there is the approach in this article's title: implementing auto-discovery with multicast. How exactly does that work? Read on.

2. Sample ES cluster configuration

ES configuration is fairly minimal: most settings have sensible defaults and only a few need to be touched. For a single-node deployment you can run it straight out of the box without changing anything. Here are two simple cluster configuration samples (elasticsearch.yml):

# Multicast: a node multicasts a ping request to the cluster and the other nodes respond. Settings:
discovery.zen.ping.multicast.group: 224.5.6.7
# port
discovery.zen.ping.multicast.port: 1234
# TTL of the multicast messages
discovery.zen.ping.multicast.ttl: 3
# address to bind to; null means all available network interfaces
discovery.zen.ping.multicast.address: null
# switch for multicast auto-discovery
discovery.zen.ping.multicast.enabled: true
# number of master-eligible nodes
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping_timeout: 3s

# Unicast: a node sends unicast requests to the listed hosts. Multicast can be disabled in this mode.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["172.16.0.2:9300", "172.16.0.3:9300", "172.16.0.5:9300"]

A slightly more complete configuration file, for reference:

cluster.name: elasticsearch
# Cluster name, default "elasticsearch". ES discovers nodes on the same network segment by this name, so it can also be used to separate several clusters sharing one segment.
node.name: "node1"
# Node name. By default a random name is picked from a list bundled in the ES jar (name.txt in the config folder), which contains many fun names added by the authors.
node.master: true
# Whether this node is eligible to be elected master; default true. By default the first machine in the cluster becomes master, and a new master is elected if it goes down.
node.data: true
# Whether this node stores index data; default true.
# index.number_of_shards: 5
# Default number of shards per index; default 5.
# index.number_of_replicas: 1
# Default number of replicas per index; default 1.
# path.conf: /path/to/conf
# Path of the configuration files; default is the config folder under the ES root.
# path.data: /path/to/data
# Path of the index data; default is the data folder under the ES root. Several paths can be given, separated by commas, e.g.:
# path.data: /path/to/data1,/path/to/data2
# path.work: /path/to/work
# Path for temporary files; default is the work folder under the ES root.
# path.logs: /path/to/logs
# Path for log files; default is the logs folder under the ES root.
# path.plugins: /path/to/plugins
# Path for plugins; default is the plugins folder under the ES root.
# bootstrap.mlockall: true
# bootstrap.memory_lock: true
# bootstrap.system_call_filter: false
# Set to true to lock the process memory. ES slows down once the JVM starts swapping, so swapping must be avoided: set ES_MIN_MEM and ES_MAX_MEM to the same value, make sure the machine has enough memory for ES, and allow the elasticsearch process to lock memory (on Linux e.g. via `ulimit -l unlimited`).
# network.bind_host: 0.0.0.0
# IP address to bind to, IPv4 or IPv6; default 0.0.0.0.
# network.publish_host: 192.168.0.1
# IP address other nodes use to reach this node. If unset it is detected automatically; it must be a real IP address.
# network.host: 192.168.0.1
# Sets both bind_host and publish_host above at once.
# transport.tcp.port: 9300
# TCP port for inter-node communication; default 9300.
# transport.tcp.compress: true
# Whether to compress data on the transport layer; default false (no compression).
# http.port: 9200
# HTTP port for external services; default 9200.
# http.max_content_length: 100mb
# Maximum HTTP content size; default 100mb.
# http.enabled: false
# Whether to serve HTTP at all; default true (enabled).
# gateway.type: local
# Gateway type, default local (the local filesystem); a distributed filesystem, Hadoop HDFS or Amazon S3 can also be used.
# gateway.recover_after_nodes: 1
# Start data recovery once N nodes are up; default 1.
# gateway.recover_after_time: 5m
# Timeout before the recovery process starts; default 5 minutes.
# gateway.expected_nodes: 2
# Expected number of nodes in the cluster; default 2. Once these N nodes are up, recovery starts immediately.
# cluster.routing.allocation.node_initial_primaries_recoveries: 4
# Number of concurrent recovery threads during initial recovery; default 4.
# cluster.routing.allocation.node_concurrent_recoveries: 2
# Number of concurrent recovery threads when adding/removing nodes or rebalancing; default 4.
# indices.recovery.max_size_per_sec: 0
# Bandwidth limit during recovery, e.g. 100mb; default 0 (unlimited).
# indices.recovery.concurrent_streams: 5
# Maximum number of concurrent streams opened when recovering from other shards; default 5.
# discovery.zen.minimum_master_nodes: 1
# How many master-eligible nodes this node must see; default 1. For larger clusters a higher value (2-4) is recommended.
# discovery.zen.ping.timeout: 3s
# Ping timeout when discovering other nodes; default 3s. Raise it on poor networks to avoid discovery errors.
# discovery.zen.ping.multicast.enabled: false
# Whether multicast discovery is enabled; default true.
# discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3[portX-portY]"]
# Initial list of master nodes in the cluster; newly joining nodes are discovered through them.
# Slow-log settings for search queries:
# index.search.slowlog.level: TRACE
# index.search.slowlog.threshold.query.warn: 10s
# index.search.slowlog.threshold.query.info: 5s
# index.search.slowlog.threshold.query.debug: 2s
# index.search.slowlog.threshold.query.trace: 500ms
# index.search.slowlog.threshold.fetch.warn: 1s
# index.search.slowlog.threshold.fetch.info: 800ms
# index.search.slowlog.threshold.fetch.debug: 500ms
# index.search.slowlog.threshold.fetch

In short, a basic setup is easy: with the above you can already deploy an ES cluster, and discovery already works. With multicast you do not even have to configure anything; nodes that share the same cluster name simply form one cluster. Rather magical, isn't it?
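If you want to verify that discovery actually worked, you can ask any node for the member list over its HTTP API. A minimal sketch, assuming the default HTTP port 9200 on a local node and using ES's standard _cat/nodes endpoint (the class name is made up for this example):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ClusterNodesCheck {
        public static void main(String[] args) throws Exception {
            // ask any one node; it answers for the whole cluster it has joined
            URL url = new URL("http://127.0.0.1:9200/_cat/nodes?v");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // one row per node that joined the cluster
                }
            }
        }
    }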

3. How ES implements service discovery

This section covers only the multicast implementation, nothing else.

It is wired in as a plugin, with discovery.zen.ping.multicast.enabled as the switch.

// org.elasticsearch.plugin.discovery.multicast.MulticastDiscoveryPlugin
public class MulticastDiscoveryPlugin extends Plugin {

    private final Settings settings;

    public MulticastDiscoveryPlugin(Settings settings) { this.settings = settings; }

    @Override public String name() { return "discovery-multicast"; }

    @Override public String description() { return "Multicast Discovery Plugin"; }

    public void onModule(DiscoveryModule module) {
        // the multicast discovery module is only wired in when the switch is turned on
        if (settings.getAsBoolean("discovery.zen.ping.multicast.enabled", false)) {
            module.addZenPing(MulticastZenPing.class);
        }
    }
}

So everything related to the multicast implementation falls to MulticastZenPing. From its constructors we can already see which settings are supported and what their default values are:

    // org.elasticsearch.plugin.discovery.multicast.MulticastZenPing
    public MulticastZenPing(ThreadPool threadPool, TransportService transportService, ClusterName clusterName, Version version) {
        this(EMPTY_SETTINGS, threadPool, transportService, clusterName, new NetworkService(EMPTY_SETTINGS), version);
    }

    @Inject
    public MulticastZenPing(Settings settings, ThreadPool threadPool, TransportService transportService,
                            ClusterName clusterName, NetworkService networkService, Version version) {
        super(settings);
        this.threadPool = threadPool;
        this.transportService = transportService;
        this.clusterName = clusterName;
        this.networkService = networkService;
        this.version = version;

        // read the multicast settings, with their default values
        this.address = this.settings.get("discovery.zen.ping.multicast.address");
        this.port = this.settings.getAsInt("discovery.zen.ping.multicast.port", 54328);
        this.group = this.settings.get("discovery.zen.ping.multicast.group", "224.2.2.4");
        this.bufferSize = this.settings.getAsInt("discovery.zen.ping.multicast.buffer_size", 2048);
        this.ttl = this.settings.getAsInt("discovery.zen.ping.multicast.ttl", 3);

        this.pingEnabled = this.settings.getAsBoolean("discovery.zen.ping.multicast.ping.enabled", true);

        logger.debug("using group [{}], with port [{}], ttl [{}], and address [{}]", group, port, ttl, address);
        // register MulticastPingResponseRequestHandler to handle "internal:discovery/zen/multicast" requests
        this.transportService.registerRequestHandler(ACTION_NAME, MulticastPingResponse.class, ThreadPool.Names.SAME, new MulticastPingResponseRequestHandler());
    }

Once the instance is constructed, it waits for the ES process to call start. Only then is the multicast channel created, i.e. multicast listening and sending are set up.

    // org.elasticsearch.plugin.discovery.multicast.MulticastZenPing.doStart
    @Override
    protected void doStart() {
        try {
            // we know OSX has bugs in the JVM when creating multiple instances of multicast sockets
            // causing for "socket close" exceptions when receive and/or crashes
            boolean shared = settings.getAsBoolean("discovery.zen.ping.multicast.shared", Constants.MAC_OS_X);
            // OSX does not correctly send multicasts FROM the right interface
            boolean deferToInterface = settings.getAsBoolean("discovery.zen.ping.multicast.defer_group_to_set_interface", Constants.MAC_OS_X);
            // all channel operations are delegated to this module's channel utility class
            multicastChannel = MulticastChannel.getChannel(nodeName(), shared,
                    new MulticastChannel.Config(port, group, bufferSize, ttl,
                            // don't use publish address, the use case for that is e.g. a firewall or proxy and
                            // may not even be bound to an interface on this machine! use the first bound address.
                            networkService.resolveBindHostAddress(address)[0],
                            deferToInterface),
                    new Receiver());
        } catch (Throwable t) {
            String msg = "multicast failed to start [{}], disabling. Consider using IPv4 only (by defining env. variable `ES_USE_IPV4`)";
            if (logger.isDebugEnabled()) {
                logger.debug(msg, t, ExceptionsHelper.detailedMessage(t));
            } else {
                logger.info(msg, ExceptionsHelper.detailedMessage(t));
            }
        }
    }

    // multicast.MulticastChannel.getChannel
    /**
     * Builds a channel based on the provided config, allowing to control if sharing a channel that uses
     * the same config is allowed or not.
     */
    public static MulticastChannel getChannel(String name, boolean shared, Config config, Listener listener) throws Exception {
        if (!shared) {
            return new Plain(listener, name, config);
        }
        return Shared.getSharedChannel(listener, config);
    }

    // we walk through the flow with the simple implementation
    /**
     * Simple implementation of a channel.
     */
    @SuppressForbidden(reason = "I bind to wildcard addresses. I am a total nightmare")
    private static class Plain extends MulticastChannel {
        private final ESLogger logger;
        private final Config config;

        private volatile MulticastSocket multicastSocket;
        private final DatagramPacket datagramPacketSend;
        private final DatagramPacket datagramPacketReceive;

        private final Object sendMutex = new Object();
        private final Object receiveMutex = new Object();

        private final Receiver receiver;
        private final Thread receiverThread;

        Plain(Listener listener, String name, Config config) throws Exception {
            super(listener);
            this.logger = ESLoggerFactory.getLogger(name);
            this.config = config;
            this.datagramPacketReceive = new DatagramPacket(new byte[config.bufferSize], config.bufferSize);
            this.datagramPacketSend = new DatagramPacket(new byte[config.bufferSize], config.bufferSize, InetAddress.getByName(config.group), config.port);
            // the actual multicasting is done through a MulticastSocket
            this.multicastSocket = buildMulticastSocket(config);
            this.receiver = new Receiver();
            this.receiverThread = daemonThreadFactory(Settings.builder().put("name", name).build(), "discovery#multicast#receiver").newThread(receiver);
            this.receiverThread.start();
        }

        private MulticastSocket buildMulticastSocket(Config config) throws Exception {
            SocketAddress addr = new InetSocketAddress(InetAddress.getByName(config.group), config.port);
            MulticastSocket multicastSocket = new MulticastSocket(config.port);
            try {
                multicastSocket.setTimeToLive(config.ttl);
                // OSX is not smart enough to tell that a socket bound to the
                // 'lo0' interface needs to make sure to send the UDP packet
                // out of the lo0 interface, so we need to do some special
                // workarounds to fix it.
                if (config.deferToInterface) {
                    // 'null' here tells the socket to deter to the interface set
                    // with .setInterface
                    multicastSocket.joinGroup(addr, null);
                    multicastSocket.setInterface(config.multicastInterface);
                } else {
                    multicastSocket.setInterface(config.multicastInterface);
                    multicastSocket.joinGroup(InetAddress.getByName(config.group));
                }
                multicastSocket.setReceiveBufferSize(config.bufferSize);
                multicastSocket.setSendBufferSize(config.bufferSize);
                multicastSocket.setSoTimeout(60000);
            } catch (Throwable e) {
                IOUtils.closeWhileHandlingException(multicastSocket);
                throw e;
            }
            return multicastSocket;
        }

        public Config getConfig() { return this.config; }

        // send a multicast message
        @Override
        public void send(BytesReference data) throws Exception {
            synchronized (sendMutex) {
                datagramPacketSend.setData(data.toBytes());
                multicastSocket.send(datagramPacketSend);
            }
        }

        @Override
        protected void close(Listener listener) {
            receiver.stop();
            receiverThread.interrupt();
            if (multicastSocket != null) {
                IOUtils.closeWhileHandlingException(multicastSocket);
                multicastSocket = null;
            }
            try {
                receiverThread.join(10000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        // receive multicast messages
        private class Receiver implements Runnable {

            private volatile boolean running = true;

            public void stop() { running = false; }

            @Override
            public void run() {
                while (running) {
                    try {
                        synchronized (receiveMutex) {
                            try {
                                multicastSocket.receive(datagramPacketReceive);
                            } catch (SocketTimeoutException ignore) {
                                continue;
                            } catch (Exception e) {
                                if (running) {
                                    if (multicastSocket.isClosed()) {
                                        logger.warn("multicast socket closed while running, restarting...");
                                        multicastSocket = buildMulticastSocket(config);
                                    } else {
                                        logger.warn("failed to receive packet, throttling...", e);
                                        Thread.sleep(500);
                                    }
                                }
                                continue;
                            }
                        }
                        // once a message arrives, hand it to the listener for business processing
                        if (datagramPacketReceive.getData().length > 0) {
                            listener.onMessage(new BytesArray(datagramPacketReceive.getData()), datagramPacketReceive.getSocketAddress());
                        }
                    } catch (Throwable e) {
                        if (running) {
                            logger.warn("unexpected exception in multicast receiver", e);
                        }
                    }
                }
            }
        }
    }

As you can see, multicast messaging is built on Java's MulticastSocket, which means it runs over UDP and delivery is not guaranteed. A receiver thread blocks in a loop waiting for multicast packets, and every message it gets is handed to the listener for business processing. The multicast channel is just the framework; the business core is the listener implementation.

Here the listener is the Receiver implemented inside MulticastZenPing:

    // multicast.MulticastZenPing.Receiver
    private class Receiver implements MulticastChannel.Listener {

        // entry point for multicast messages
        @Override
        public void onMessage(BytesReference data, SocketAddress address) {
            int id = -1;
            DiscoveryNode requestingNodeX = null;
            ClusterName clusterName = null;

            Map<String, Object> externalPingData = null;
            XContentType xContentType = null;

            try {
                boolean internal = false;
                if (data.length() > 4) {
                    int counter = 0;
                    for (; counter < INTERNAL_HEADER.length; counter++) {
                        if (data.get(counter) != INTERNAL_HEADER[counter]) {
                            break;
                        }
                    }
                    if (counter == INTERNAL_HEADER.length) {
                        internal = true;
                    }
                }
                if (internal) {
                    StreamInput input = StreamInput.wrap(new BytesArray(data.toBytes(), INTERNAL_HEADER.length, data.length() - INTERNAL_HEADER.length));
                    Version version = Version.readVersion(input);
                    input.setVersion(version);
                    id = input.readInt();
                    clusterName = ClusterName.readClusterName(input);
                    requestingNodeX = readNode(input);
                } else {
                    xContentType = XContentFactory.xContentType(data);
                    if (xContentType != null) {
                        // an external ping
                        try (XContentParser parser = XContentFactory.xContent(xContentType).createParser(data)) {
                            externalPingData = parser.map();
                        }
                    } else {
                        throw new IllegalStateException("failed multicast message, probably message from previous version");
                    }
                }
                if (externalPingData != null) {
                    handleExternalPingRequest(externalPingData, xContentType, address);
                } else {
                    handleNodePingRequest(id, requestingNodeX, clusterName);
                }
            } catch (Exception e) {
                if (!lifecycle.started() || (e instanceof EsRejectedExecutionException)) {
                    logger.debug("failed to read requesting data from {}", e, address);
                } else {
                    logger.warn("failed to read requesting data from {}", e, address);
                }
            }
        }

        @SuppressWarnings("unchecked")
        private void handleExternalPingRequest(Map<String, Object> externalPingData, XContentType contentType, SocketAddress remoteAddress) {
            if (externalPingData.containsKey("response")) {
                // ignoring responses sent over the multicast channel
                logger.trace("got an external ping response (ignoring) from {}, content {}", remoteAddress, externalPingData);
                return;
            }

            if (multicastChannel == null) {
                logger.debug("can't send ping response, no socket, from {}, content {}", remoteAddress, externalPingData);
                return;
            }

            Map<String, Object> request = (Map<String, Object>) externalPingData.get("request");
            if (request == null) {
                logger.warn("malformed external ping request, no 'request' element from {}, content {}", remoteAddress, externalPingData);
                return;
            }
            // read the sender's cluster_name; identical names mean the same cluster
            final String requestClusterName = request.containsKey("cluster_name") ? request.get("cluster_name").toString() : request.containsKey("clusterName") ? request.get("clusterName").toString() : null;
            if (requestClusterName == null) {
                logger.warn("malformed external ping request, missing 'cluster_name' element within request, from {}, content {}", remoteAddress, externalPingData);
                return;
            }

            if (!requestClusterName.equals(clusterName.value())) {
                logger.trace("got request for cluster_name {}, but our cluster_name is {}, from {}, content {}", requestClusterName, clusterName.value(), remoteAddress, externalPingData);
                return;
            }
            if (logger.isTraceEnabled()) {
                logger.trace("got external ping request from {}, content {}", remoteAddress, externalPingData);
            }

            try {
                DiscoveryNode localNode = contextProvider.nodes().localNode();

                XContentBuilder builder = XContentFactory.contentBuilder(contentType);
                builder.startObject().startObject("response");
                builder.field("cluster_name", clusterName.value());
                builder.startObject("version").field("number", version.number()).field("snapshot_build", version.snapshot).endObject();
                builder.field("transport_address", localNode.address().toString());

                if (contextProvider.nodeService() != null) {
                    for (Map.Entry<String, String> attr : contextProvider.nodeService().attributes().entrySet()) {
                        builder.field(attr.getKey(), attr.getValue());
                    }
                }

                builder.startObject("attributes");
                for (Map.Entry<String, String> attr : localNode.attributes().entrySet()) {
                    builder.field(attr.getKey(), attr.getValue());
                }
                builder.endObject();

                builder.endObject().endObject();
                multicastChannel.send(builder.bytes());
                if (logger.isTraceEnabled()) {
                    logger.trace("sending external ping response {}", builder.string());
                }
            } catch (Exception e) {
                logger.warn("failed to send external multicast response", e);
            }
        }

        private void handleNodePingRequest(int id, DiscoveryNode requestingNodeX, ClusterName requestClusterName) {
            if (!pingEnabled || multicastChannel == null) {
                return;
            }
            final DiscoveryNodes discoveryNodes = contextProvider.nodes();
            final DiscoveryNode requestingNode = requestingNodeX;
            if (requestingNode.id().equals(discoveryNodes.localNodeId())) {
                // that's me, ignore
                return;
            }
            if (!requestClusterName.equals(clusterName)) {
                if (logger.isTraceEnabled()) {
                    logger.trace("[{}] received ping_request from [{}], but wrong cluster_name [{}], expected [{}], ignoring", id, requestingNode, requestClusterName.value(), clusterName.value());
                }
                return;
            }
            // don't connect between two client nodes, no need for that...
            if (!discoveryNodes.localNode().shouldConnectTo(requestingNode)) {
                if (logger.isTraceEnabled()) {
                    logger.trace("[{}] received ping_request from [{}], both are client nodes, ignoring", id, requestingNode, requestClusterName);
                }
                return;
            }
            final MulticastPingResponse multicastPingResponse = new MulticastPingResponse();
            multicastPingResponse.id = id;
            multicastPingResponse.pingResponse = new PingResponse(discoveryNodes.localNode(), discoveryNodes.masterNode(), clusterName, contextProvider.nodeHasJoinedClusterOnce());

            if (logger.isTraceEnabled()) {
                logger.trace("[{}] received ping_request from [{}], sending {}", id, requestingNode, multicastPingResponse.pingResponse);
            }
            // connect to the requesting node, i.e. join the cluster
            if (!transportService.nodeConnected(requestingNode)) {
                // do the connect and send on a thread pool
                threadPool.generic().execute(new Runnable() {
                    @Override
                    public void run() {
                        // connect to the node if possible
                        try {
                            transportService.connectToNode(requestingNode);
                            transportService.sendRequest(requestingNode, ACTION_NAME, multicastPingResponse, new EmptyTransportResponseHandler(ThreadPool.Names.SAME) {
                                @Override
                                public void handleException(TransportException exp) {
                                    logger.warn("failed to receive confirmation on sent ping response to [{}]", exp, requestingNode);
                                }
                            });
                        } catch (Exception e) {
                            if (lifecycle.started()) {
                                logger.warn("failed to connect to requesting node {}", e, requestingNode);
                            }
                        }
                    }
                });
            } else {
                transportService.sendRequest(requestingNode, ACTION_NAME, multicastPingResponse, new EmptyTransportResponseHandler(ThreadPool.Names.SAME) {
                    @Override
                    public void handleException(TransportException exp) {
                        if (lifecycle.started()) {
                            logger.warn("failed to receive confirmation on sent ping response to [{}]", exp, requestingNode);
                        }
                    }
                });
            }
        }
    }

The handling boils down to: on receiving a node's multicast ping, read the cluster name; if it matches, treat the sender as part of the same cluster, send back this node's information, and connect to that node so the two sides keep a communication link.

Many other details are omitted, but the overall question is answered: ES discovers cluster nodes by sending a multicast message; instances in the same multicast group receive it, compare the cluster_name to decide whether they belong to the same cluster, and then form the cluster automatically.

Keep in mind, though, that multicast messaging is unreliable. In scenarios with strict reliability requirements, surprising things tend to happen; that is probably the biggest caveat when using multicast. Weigh the benefit against the risk to decide whether the technique is worth using.

4. Multicast in Dubbo

Dubbo also uses multicast, although generally only for testing. It builds a registry on top of multicast, much in the same spirit as ES. The biggest benefit of a multicast registry is that the system can be assembled without introducing any third-party component, which keeps test setups free of extra dependencies. Since both follow the MulticastSocket programming paradigm, the two implementations should not differ much in essence. Let's verify that with Dubbo's multicast registry:

    // com.alibaba.dubbo.registry.multicast.MulticastRegistry#MulticastRegistry
    public MulticastRegistry(URL url) {
        super(url);
        if (url.isAnyHost()) {
            throw new IllegalStateException("registry address == null");
        }
        if (! isMulticastAddress(url.getHost())) {
            throw new IllegalArgumentException("Invalid multicast address " + url.getHost() + ", scope: 224.0.0.0 - 239.255.255.255");
        }
        try {
            mutilcastAddress = InetAddress.getByName(url.getHost());
            mutilcastPort = url.getPort() <= 0 ? DEFAULT_MULTICAST_PORT : url.getPort();
            mutilcastSocket = new MulticastSocket(mutilcastPort);
            mutilcastSocket.setLoopbackMode(false);
            mutilcastSocket.joinGroup(mutilcastAddress);
            Thread thread = new Thread(new Runnable() {
                public void run() {
                    byte[] buf = new byte[2048];
                    DatagramPacket recv = new DatagramPacket(buf, buf.length);
                    // wait for multicast messages forever: whenever a node goes online or offline it sends
                    // a message to this multicast address, and this service receives it
                    while (! mutilcastSocket.isClosed()) {
                        try {
                            mutilcastSocket.receive(recv);
                            // the maximum message size is assumed to be 2048 bytes; anything longer may cause unexpected errors or be ignored
                            // convert the payload into a string
                            String msg = new String(recv.getData()).trim();
                            int i = msg.indexOf('\n');
                            // only the first line is taken; the rest is ignored
                            if (i > 0) {
                                msg = msg.substring(0, i).trim();
                            }
                            // hand the multicast message to the registry for the business response
                            MulticastRegistry.this.receive(msg, (InetSocketAddress) recv.getSocketAddress());
                            Arrays.fill(buf, (byte) 0);
                        } catch (Throwable e) {
                            if (! mutilcastSocket.isClosed()) {
                                logger.error(e.getMessage(), e);
                            }
                        }
                    }
                }
            }, "DubboMulticastRegistryReceiver");
            thread.setDaemon(true);
            thread.start();
        } catch (IOException e) {
            throw new IllegalStateException(e.getMessage(), e);
        }
        this.cleanPeriod = url.getParameter(Constants.SESSION_TIMEOUT_KEY, Constants.DEFAULT_SESSION_TIMEOUT);
        if (url.getParameter("clean", true)) {
            this.cleanFuture = cleanExecutor.scheduleWithFixedDelay(new Runnable() {
                public void run() {
                    try {
                        clean(); // evict expired providers
                    } catch (Throwable t) { // defensive fault tolerance
                        logger.error("Unexpected exception occur at clean expired provider, cause: " + t.getMessage(), t);
                    }
                }
            }, cleanPeriod, cleanPeriod, TimeUnit.MILLISECONDS);
        } else {
            this.cleanFuture = null;
        }
    }

    // handle a multicast message and react accordingly
    private void receive(String msg, InetSocketAddress remoteAddress) {
        if (logger.isInfoEnabled()) {
            logger.info("Receive multicast message: " + msg + " from " + remoteAddress);
        }
        // register message
        if (msg.startsWith(Constants.REGISTER)) {
            URL url = URL.valueOf(msg.substring(Constants.REGISTER.length()).trim());
            registered(url);
        }
        // unregister message
        else if (msg.startsWith(Constants.UNREGISTER)) {
            URL url = URL.valueOf(msg.substring(Constants.UNREGISTER.length()).trim());
            unregistered(url);
        }
        // subscribe message
        else if (msg.startsWith(Constants.SUBSCRIBE)) {
            URL url = URL.valueOf(msg.substring(Constants.SUBSCRIBE.length()).trim());
            Set<URL> urls = getRegistered();
            if (urls != null && urls.size() > 0) {
                for (URL u : urls) {
                    if (UrlUtils.isMatch(url, u)) {
                        String host = remoteAddress != null && remoteAddress.getAddress() != null
                                ? remoteAddress.getAddress().getHostAddress() : url.getIp();
                        if (url.getParameter("unicast", true) // does the consumer machine run only one process?
                                && ! NetUtils.getLocalHost().equals(host)) { // multiple processes on one machine cannot use unicast, otherwise only one of them would get the message
                            // reply with a unicast register message
                            unicast(Constants.REGISTER + " " + u.toFullString(), host);
                        } else {
                            // reply with a multicast register message
                            broadcast(Constants.REGISTER + " " + u.toFullString());
                        }
                    }
                }
            }
        }/* else if (msg.startsWith(UNSUBSCRIBE)) {
        }*/
    }

    // com.alibaba.dubbo.registry.support.FailbackRegistry#FailbackRegistry
    public FailbackRegistry(URL url) {
        super(url);
        int retryPeriod = url.getParameter(Constants.REGISTRY_RETRY_PERIOD_KEY, Constants.DEFAULT_REGISTRY_RETRY_PERIOD);
        this.retryFuture = retryExecutor.scheduleWithFixedDelay(new Runnable() {
            public void run() {
                // check and reconnect to the registry
                try {
                    retry();
                } catch (Throwable t) { // defensive fault tolerance
                    logger.error("Unexpected error occur at failed retry, cause: " + t.getMessage(), t);
                }
            }
        }, retryPeriod, retryPeriod, TimeUnit.MILLISECONDS);
    }

    // send a message to the multicast group
    private void broadcast(String msg) {
        if (logger.isInfoEnabled()) {
            logger.info("Send broadcast message: " + msg + " to " + mutilcastAddress + ":" + mutilcastPort);
        }
        try {
            byte[] data = (msg + "\n").getBytes();
            DatagramPacket hi = new DatagramPacket(data, data.length, mutilcastAddress, mutilcastPort);
            mutilcastSocket.send(hi);
        } catch (Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        }
    }

    // send a unicast message to a single host
    private void unicast(String msg, String host) {
        if (logger.isInfoEnabled()) {
            logger.info("Send unicast message: " + msg + " to " + host + ":" + mutilcastPort);
        }
        try {
            byte[] data = (msg + "\n").getBytes();
            DatagramPacket hi = new DatagramPacket(data, data.length, InetAddress.getByName(host), mutilcastPort);
            mutilcastSocket.send(hi);
        } catch (Exception e) {
            throw new IllegalStateException(e.getMessage(), e);
        }
    }

More of the implementation:

    // more of the implementation
protected void doRegister(URL url) { broadcast(Constants.REGISTER + " " + url.toFullString()); }
protected void doUnregister(URL url) { broadcast(Constants.UNREGISTER + " " + url.toFullString()); }
    protected void doSubscribe(URL url, NotifyListener listener) {
        if (Constants.ANY_VALUE.equals(url.getServiceInterface())) {
            admin = true;
        }
        broadcast(Constants.SUBSCRIBE + " " + url.toFullString());
        synchronized (listener) {
            try {
                listener.wait(url.getParameter(Constants.TIMEOUT_KEY, Constants.DEFAULT_TIMEOUT));
            } catch (InterruptedException e) {
            }
        }
    }

    protected void doUnsubscribe(URL url, NotifyListener listener) {
        if (! Constants.ANY_VALUE.equals(url.getServiceInterface()) && url.getParameter(Constants.REGISTER_KEY, true)) {
            unregister(url);
        }
        broadcast(Constants.UNSUBSCRIBE + " " + url.toFullString());
    }
public boolean isAvailable() { try { return mutilcastSocket != null; } catch (Throwable t) { return false; } }
    public void destroy() {
        super.destroy();
        try {
            if (cleanFuture != null) {
                cleanFuture.cancel(true);
            }
        } catch (Throwable t) {
            logger.warn(t.getMessage(), t);
        }
        try {
            mutilcastSocket.leaveGroup(mutilcastAddress);
            mutilcastSocket.close();
        } catch (Throwable t) {
            logger.warn(t.getMessage(), t);
        }
    }

    protected void registered(URL url) {
        for (Map.Entry<URL, Set<NotifyListener>> entry : getSubscribed().entrySet()) {
            URL key = entry.getKey();
            if (UrlUtils.isMatch(key, url)) {
                Set<URL> urls = received.get(key);
                if (urls == null) {
                    received.putIfAbsent(key, new ConcurrentHashSet<URL>());
                    urls = received.get(key);
                }
                urls.add(url);
                List<URL> list = toList(urls);
                for (NotifyListener listener : entry.getValue()) {
                    notify(key, listener, list);
                    synchronized (listener) {
                        listener.notify();
                    }
                }
            }
        }
    }

    protected void unregistered(URL url) {
        for (Map.Entry<URL, Set<NotifyListener>> entry : getSubscribed().entrySet()) {
            URL key = entry.getKey();
            if (UrlUtils.isMatch(key, url)) {
                Set<URL> urls = received.get(key);
                if (urls != null) {
                    urls.remove(url);
                }
                List<URL> list = toList(urls);
                for (NotifyListener listener : entry.getValue()) {
                    notify(key, listener, list);
                }
            }
        }
    }
protected void subscribed(URL url, NotifyListener listener) { List<URL> urls = lookup(url); notify(url, listener, urls); }
private List<URL> toList(Set<URL> urls) { List<URL> list = new ArrayList<URL>(); if (urls != null && urls.size() > 0) { for (URL url : urls) { list.add(url); } } return list; }
public void register(URL url) { super.register(url); registered(url); }
public void unregister(URL url) { super.unregister(url); unregistered(url); }
public void subscribe(URL url, NotifyListener listener) { super.subscribe(url, listener); subscribed(url, listener); }
public void unsubscribe(URL url, NotifyListener listener) { super.unsubscribe(url, listener); received.remove(url); }
    public List<URL> lookup(URL url) {
        List<URL> urls = new ArrayList<URL>();
        Map<String, List<URL>> notifiedUrls = getNotified().get(url);
        if (notifiedUrls != null && notifiedUrls.size() > 0) {
            for (List<URL> values : notifiedUrls.values()) {
                urls.addAll(values);
            }
        }
        if (urls == null || urls.size() == 0) {
            List<URL> cacheUrls = getCacheUrls(url);
            if (cacheUrls != null && cacheUrls.size() > 0) {
                urls.addAll(cacheUrls);
            }
        }
        if (urls == null || urls.size() == 0) {
            for (URL u : getRegistered()) {
                if (UrlUtils.isMatch(url, u)) {
                    urls.add(u);
                }
            }
        }
        if (Constants.ANY_VALUE.equals(url.getServiceInterface())) {
            for (URL u : getSubscribed().keySet()) {
                if (UrlUtils.isMatch(url, u)) {
                    urls.add(u);
                }
            }
        }
        return urls;
    }
public MulticastSocket getMutilcastSocket() { return mutilcastSocket; }
public Map<URL, Set<URL>> getReceived() { return received; }

The multicast paradigm, then, is: create a MulticastSocket; joinGroup on a group address; send a multicast message with send; have a background thread loop receiving multicast messages; hand each message to the business listener callback; then keep listening for the next one. A minimal standalone sketch of this paradigm follows.
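This sketch is self-contained and only illustrative: the group address, port, and message format are arbitrary choices for the example, not ES's or Dubbo's actual values.

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;
    import java.nio.charset.StandardCharsets;

    public class MulticastDemo {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("224.5.6.7"); // illustrative multicast group
            int port = 12345;

            MulticastSocket socket = new MulticastSocket(port);
            socket.joinGroup(group); // join the group so we receive what others send to it

            // background thread: loop forever receiving multicast messages and hand them to the "listener"
            Thread receiver = new Thread(() -> {
                byte[] buf = new byte[2048];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                while (!socket.isClosed()) {
                    try {
                        socket.receive(packet);
                        String msg = new String(packet.getData(), 0, packet.getLength(), StandardCharsets.UTF_8);
                        // "business" handling: here we just print who said what
                        System.out.println("received [" + msg + "] from " + packet.getSocketAddress());
                    } catch (Exception e) {
                        if (!socket.isClosed()) e.printStackTrace();
                    }
                }
            }, "multicast-receiver");
            receiver.setDaemon(true);
            receiver.start();

            // send one multicast message; every member of the group (typically including the sender) receives it
            byte[] data = "ping cluster_name=my-cluster".getBytes(StandardCharsets.UTF_8);
            socket.send(new DatagramPacket(data, data.length, group, port));

            Thread.sleep(1000); // give the receiver a moment before the JVM exits
            socket.close();
        }
    }

Run two copies of this on the same LAN segment and each will see the other's ping, which is exactly the effect ES relies on to group nodes by cluster_name.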







Author: 等你归去来

Source: https://www.cnblogs.com/yougewe/p/14556859.html
