How to Configure Multiple Data Sources in Spring for Read/Write Splitting?
As the business keeps growing, the volume of data gets larger and the pressure on the database increases day by day. At that point we consider strategies such as SQL optimization, splitting tables and databases, and read/write splitting to improve database performance. This post only covers read/write splitting. So what is it?
For example, we divide the database into one master and n slaves. The master handles writes and reads that need up-to-date data, while the slaves handle reads that can tolerate some replication lag. This effectively relieves the pressure on the database.
Implementing Read/Write Splitting with Spring
Technologies
Spring
Mybatis
MySQL
Approach
Previously we configured a third-party data source such as c3p0 or druid directly; now we need to use a custom data source.
(1) Previous configuration
"dataSource" class="com.alibaba.druid.pool.DruidDataSource">
"driverClassName" value="${jdbc.driver}"/>
"url" value="${jdbc.url}"/>
"username" value="${jdbc.username}"/>
"password" value="${jdbc.password}"/>
"sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
"dataSource" ref="dataSource"/>
"mapperLocations" value="classpath:mapper/*.xml"/>
(2) Current configuration
<bean id="dataSourceMaster" class="com.alibaba.druid.pool.DruidDataSource">
<property name="driverClassName" value="${jdbc.driver}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
<bean id="dataSourceSlave1" class="com.alibaba.druid.pool.DruidDataSource">
<property name="driverClassName" value="${jdbc.driver}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
<bean id="dataSourceSlave2" class="com.alibaba.druid.pool.DruidDataSource">
<property name="driverClassName" value="${jdbc.driver}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
<bean id="dataSource" class="org.tyshawn.muti_datasource.DynamicDataSource">
<property name="defaultTargetDataSource" ref="dataSourceMaster"/>
<property name="targetDataSources">
<map key-type="java.lang.String">
<entry key="master" value-ref="dataSourceMaster"/>
<entry key="slave1" value-ref="dataSourceSlave1"/>
<entry key="slave2" value-ref="dataSourceSlave2"/>
</map>
</property>
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource"/>
<property name="mapperLocations" value="classpath:mapper/*.xml"/>
</bean>
DynamicDataSource is our custom data source. It extends AbstractRoutingDataSource and overrides some of its methods. Let's first look at how AbstractRoutingDataSource is implemented internally.
public abstract class AbstractRoutingDataSource extends AbstractDataSource implements InitializingBean {
private Map<Object, Object> targetDataSources;
private Object defaultTargetDataSource;
private Map<Object, DataSource> resolvedDataSources;
private DataSource resolvedDefaultDataSource;
public AbstractRoutingDataSource() {
}
public void setTargetDataSources(Map<Object, Object> targetDataSources) {
this.targetDataSources = targetDataSources;
}
public void setDefaultTargetDataSource(Object defaultTargetDataSource) {
this.defaultTargetDataSource = defaultTargetDataSource;
}
public void afterPropertiesSet() {
if (this.targetDataSources == null) {
throw new IllegalArgumentException("Property 'targetDataSources' is required");
} else {
this.resolvedDataSources = new HashMap(this.targetDataSources.size());
Iterator var1 = this.targetDataSources.entrySet().iterator();
while(var1.hasNext()) {
Entry<Object, Object> entry = (Entry)var1.next();
Object lookupKey = this.resolveSpecifiedLookupKey(entry.getKey());
DataSource dataSource = this.resolveSpecifiedDataSource(entry.getValue());
this.resolvedDataSources.put(lookupKey, dataSource);
}
if (this.defaultTargetDataSource != null) {
this.resolvedDefaultDataSource = this.resolveSpecifiedDataSource(this.defaultTargetDataSource);
}
}
}
protected DataSource determineTargetDataSource() {
Assert.notNull(this.resolvedDataSources, "DataSource router not initialized");
Object lookupKey = this.determineCurrentLookupKey();
DataSource dataSource = (DataSource)this.resolvedDataSources.get(lookupKey);
if (dataSource == null && (this.lenientFallback || lookupKey == null)) {
dataSource = this.resolvedDefaultDataSource;
}
if (dataSource == null) {
throw new IllegalStateException("Cannot determine target DataSource for lookup key [" + lookupKey + "]");
} else {
return dataSource;
}
}
protected abstract Object determineCurrentLookupKey();
}
The code above is the core of AbstractRoutingDataSource; I removed some of it for readability. In the configuration file we set the initial values of defaultTargetDataSource and targetDataSources. When an implementation of AbstractRoutingDataSource is initialized, afterPropertiesSet() runs; it resolves defaultTargetDataSource into resolvedDefaultDataSource and targetDataSources into resolvedDataSources. Every time a request uses the data source, determineTargetDataSource() is executed; it decides which data source is accessed.
How does determineTargetDataSource() decide which data source to access? Pay attention to these two lines:
Object lookupKey = this.determineCurrentLookupKey();
DataSource dataSource = (DataSource)this.resolvedDataSources.get(lookupKey);
It calls determineCurrentLookupKey() to obtain a key and then looks up the data source in resolvedDataSources by that key. All we need to do is override determineCurrentLookupKey() and return a different data source key depending on the situation.
Not sure what the data source key is? It is simply the key we configured in the configuration file, as shown below:
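<entry key="master" value-ref="dataSourceMaster"/>
<entry key="slave1" value-ref="dataSourceSlave1"/>
<entry key="slave2" value-ref="dataSourceSlave2"/>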
Implementation
(1) Define a data source type helper class
The core of this class is the ThreadLocal<String> dataSourceTypes, whose default value is master. Under concurrent access, every thread sees master as the initial value when it first reads dataSourceTypes.
public class DataSourceTypeManager {
/**
* Data source type constants
*/
private static final String MASTER = "master";
private static final String SLAVE = "slave";
/**
* Data source type in use for the current thread
*/
private static final ThreadLocal<String> dataSourceTypes = new ThreadLocal<String>(){
@Override
protected String initialValue(){
return MASTER;
}
};
public static ThreadLocal<String> getDataSourceType(){
return dataSourceTypes;
}
public static void setSlave() {
dataSourceTypes.set(SLAVE);
}
public static boolean isMaster(Object dataSourceType) {
return dataSourceType.equals(MASTER);
}
}
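As a quick illustration (this snippet is not part of the original code), a caller marks the current thread as a read request, the routing data source reads the flag, and the ThreadLocal is cleared afterwards:
// Hypothetical usage sketch of DataSourceTypeManager
DataSourceTypeManager.setSlave();
String type = DataSourceTypeManager.getDataSourceType().get(); // "slave"
DataSourceTypeManager.getDataSourceType().remove(); // next get() returns the default "master" again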
(2) Custom data source
Earlier I said a custom data source only needs to override determineCurrentLookupKey(), so why is setTargetDataSources() also overridden here? Because we need some extra information: the number of slave data sources and the list of slave keys.
Let's look at what the overridden determineCurrentLookupKey() does. It first reads the dataSourceTypes value from DataSourceTypeManager; if it is master, nothing extra happens, and if it is slave, one slave key is picked from the slave list in round-robin fashion. Note one important detail: the ThreadLocal must be removed after use, otherwise thread reuse (for example in a thread pool) can leak the value to the next request.
public class DynamicDataSource extends AbstractRoutingDataSource {
private static Logger logger = Logger.getLogger(DynamicDataSource.class);
/**
* Round-robin counter
*/
private AtomicInteger counter = new AtomicInteger(-1);
/**
* Number of slave data sources
*/
private Integer slaveCount;
/**
* Slave data source keys
*/
private List<Object> slaveDataSourceTypes = new ArrayList<>();
/**
* Override setTargetDataSources() to capture the number of slaves and the list of slave keys
* @param targetDataSources
*/
@Override
public void setTargetDataSources(Map<Object, Object> targetDataSources) {
super.setTargetDataSources(targetDataSources);
// Subtract one for the master
this.slaveCount = targetDataSources.size() -1;
// Collect the slave keys
for (Map.Entry<Object, Object> entry : targetDataSources.entrySet()) {
if (DataSourceTypeManager.isMaster(entry.getKey())) {
continue;
}
slaveDataSourceTypes.add(entry.getKey());
}
}
@Override
protected Object determineCurrentLookupKey() {
Object lookupKey;
String datasourceType = DataSourceTypeManager.getDataSourceType().get();
if (DataSourceTypeManager.isMaster(datasourceType)) {
lookupKey = datasourceType;
}else {
lookupKey = getSlaveKey();
}
// Remove the ThreadLocal value to avoid leaking it to a reused thread
DataSourceTypeManager.getDataSourceType().remove();
logger.info("当前使用的数据源: " + lookupKey);
return lookupKey;
}
/**
* Pick a slave key in round-robin fashion
* @return
*/
public Object getSlaveKey() {
// Use modulo so that concurrent increments cannot step past the end of the list
int index = Math.floorMod(counter.incrementAndGet(), slaveCount);
return slaveDataSourceTypes.get(index);
}
}
(3) Use Spring AOP to control which data source is used
The pointcut covers the read operations in the service layer. We define a before advice: when a read operation in the service layer is called, the dataSourceTypes value in DataSourceTypeManager is switched to slave.
@Component("dataSourceTypeAspect")
public class DataSourceTypeAspect {
/**
* Before advice: switch the current thread to the slave data source
*/
public void before() {
DataSourceTypeManager.setSlave();
}
}
<aop:config>
<aop:aspect ref="dataSourceTypeAspect" order="0">
<aop:pointcut expression="execution(* org.tyshawn.service.impl.UserServiceImpl.getById(..))" id="myPointCut"/>
<aop:before method="before" pointcut-ref="myPointCut"/>
</aop:aspect>
</aop:config>
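For illustration only, here is a hypothetical UserServiceImpl whose getById() method is matched by the pointcut above; UserService, UserDao and User are assumed types that do not appear in the original post.
// Hypothetical service: getById() is matched by the pointcut above, so the
// before advice switches the current thread to a slave before the query runs.
@Service
public class UserServiceImpl implements UserService {
@Autowired
private UserDao userDao;
public User getById(Long id) {
return userDao.getById(id);
}
}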
(4) Complete Spring configuration
Note one detail when configuring AOP: the advice that switches the data source must run before the transaction advice, so the order attributes have to be set accordingly.
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">
<context:property-placeholder location="classpath:jdbc/jdbc.properties"/>
<context:component-scan base-package="org.tyshawn"/>
<bean id="dataSourceMaster" class="com.alibaba.druid.pool.DruidDataSource">
<property name="driverClassName" value="${jdbc.driver}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
<bean id="dataSourceSlave1" class="com.alibaba.druid.pool.DruidDataSource">
<property name="driverClassName" value="${jdbc.driver}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
<bean id="dataSourceSlave2" class="com.alibaba.druid.pool.DruidDataSource">
<property name="driverClassName" value="${jdbc.driver}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
</bean>
<bean id="dataSource" class="org.tyshawn.muti_datasource.DynamicDataSource">
<property name="defaultTargetDataSource" ref="dataSourceMaster"/>
<property name="targetDataSources">
<map key-type="java.lang.String">
<entry key="master" value-ref="dataSourceMaster"/>
<entry key="slave1" value-ref="dataSourceSlave1"/>
<entry key="slave2" value-ref="dataSourceSlave2"/>
</map>
</property>
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource"/>
<property name="mapperLocations" value="classpath:mapper/*.xml"/>
</bean>
<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
<property name="sqlSessionFactoryBeanName" value="sqlSessionFactory"/>
<property name="basePackage" value="org.tyshawn.dao"/>
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource">property>
bean>
<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="save*" propagation="REQUIRED" rollback-for="Throwable"/>
<tx:method name="update*" propagation="REQUIRED" rollback-for="Throwable"/>
<tx:method name="remove*" propagation="REQUIRED" rollback-for="Throwable"/>
<tx:method name="del*" propagation="REQUIRED" rollback-for="Throwable"/>
</tx:attributes>
</tx:advice>
<aop:config>
<aop:advisor advice-ref="txAdvice" pointcut="execution(* org.tyshawn.service.*.*(..))" order="1"/>
<aop:aspect ref="dataSourceTypeAspect" order="0">
<aop:pointcut expression="execution(* org.tyshawn.service.impl.UserServiceImpl.getById(..))" id="myPointCut"/>
<aop:before method="before" pointcut-ref="myPointCut"/>
</aop:aspect>
</aop:config>
</beans>
(5) jdbc.properties
jdbc.driver=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://127.0.0.1:3306/tyshawn_test?useUnicode=true&characterEncoding=utf-8
jdbc.username=root
jdbc.password=123
Improvement 1
Above, Spring AOP decides which data source to use: reads in the service layer go to a slave, writes go to the master. That scenario is too idealized. Think about a real system: if an interface first reads and then writes, what happens when it is called with heavy concurrency? The reads may not see the latest data, because MySQL master-slave replication is not real-time.
What we actually want is for the master to handle writes and reads that need up-to-date data, and for the slaves to handle reads that can tolerate some staleness. The improvement at the code level is to replace Spring AOP with a servlet filter: only read-only endpoints with low freshness requirements use a slave data source; everything else uses the master.
(1) Filter
The filter holds a set of URLs, the read-only endpoints of the application. Each time a request enters the filter, we first check whether its URL is in the set: if it is, a slave data source is used; if not, the master is used.
public class ReadAndWriteCheckFilter implements Filter {
private static final Set<String> READ_URLS = new HashSet<>();
@Override
public void init(FilterConfig filterConfig) throws ServletException {
READ_URLS.add("/springmvc/show.do");
// ...
}
@Override
public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {
HttpServletRequest request = (HttpServletRequest) req;
// Switch read-only endpoints to the slave data source
String uri = request.getRequestURI();
if (READ_URLS.contains(uri)) {
DataSourceTypeManager.setSlave();
}
chain.doFilter(req, res);
}
@Override
public void destroy() {}
}
(2) Spring configuration
Remove the previous data-source aspect; only the transaction configuration remains.
<aop:config>
<aop:advisor advice-ref="txAdvice" pointcut="execution(* org.tyshawn.service.*.*(..))"/>
</aop:config>
(3) web.xml
Register the filter.
<filter>
<filter-name>ReadAndWriteCheckFilter</filter-name>
<filter-class>org.tyshawn.muti_datasource.ReadAndWriteCheckFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>ReadAndWriteCheckFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
Improvement 2
We can improve further on Improvement 1. In a real business scenario the data sources are certainly not configured by hand in the Spring configuration file; instead, the connection information is kept in a properties file on some server, and we write code to read it and create the data sources dynamically. In addition, every slave data source has a weight, so a weighted random algorithm is used to pick a slave. Finally, there may be more than one set of data source configuration, and we have to be able to switch between sets dynamically.
(1) jdbc.properties
The data source configuration now looks different: it can contain several sets, each with a master and its slaves, and every slave carries a weight (see the parsing sketch right after the properties).
# Master
jdbc.basic.url=jdbc:mysql://127.0.0.1:3306/tyshawn_test?useUnicode=true&characterEncoding=utf-8
jdbc.basic.username=root
jdbc.basic.password=123
# Defaults that can be shared by master and slaves
jdbc.basic.driverClassName=com.mysql.jdbc.Driver
jdbc.basic.initialSize=10
jdbc.basic.maxActive=400
jdbc.basic.maxIdle=100
jdbc.basic.minIdle=10
jdbc.basic.minEvictableIdleTimeMillis=500000
jdbc.basic.timeBetweenEvictionRunsMillis=500000
# Slaves
# Slave host addresses and weights: host:port[weight];host:port[weight]
jdbc.basic.slave.hosts=127.0.0.1:3306[2];127.0.0.1:3306[1]
jdbc.basic.slave.username=root
jdbc.basic.slave.password=123
# Second set ...
# Third set ...
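A minimal sketch (not part of the original code; the variable names are made up) of how a jdbc.*.slave.hosts value in this format is split into host and weight, mirroring the parsing done later in createDataSource():
// Hypothetical parsing sketch for "127.0.0.1:3306[2];127.0.0.1:3306[1]"
String hostStr = "127.0.0.1:3306[2];127.0.0.1:3306[1]";
for (String host : hostStr.split(";")) {
String[] hostSplit = host.split("\\[");
String address = hostSplit[0]; // "127.0.0.1:3306"
int weight = Integer.parseInt(hostSplit[1].split("]")[0]); // 2 or 1
System.out.println(address + " -> weight " + weight);
}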
(2) Configuration helper class
public class Configuration {
private static Logger logger = Logger.getLogger(Configuration.class);
private static Properties prop = new Properties();
private static Configuration instance = null;
/**
* Path to the jdbc properties file on the classpath
* (no leading slash, because the file is loaded via ClassLoader.getResourceAsStream())
*/
private static final String JDBC_FILE_URL = "jdbc/jdbc.properties";
/**
* Property keys for the master jdbc configuration
*/
public static final String JDBC_DRIVER_CLASS_NAME = "jdbc.%s.driverClassName";
public static final String JDBC_URL = "jdbc.%s.url";
public static final String JDBC_USERNAME = "jdbc.%s.username";
public static final String JDBC_PASSWORD = "jdbc.%s.password";
public static final String JDBC_INITIALSIZE = "jdbc.%s.initialSize";
public static final String JDBC_MAXACTIVE = "jdbc.%s.maxActive";
public static final String JDBC_MAXIDLE = "jdbc.%s.maxIdle";
public static final String JDBC_MINIDLE = "jdbc.%s.minIdle";
public static final String JDBC_MINEVICTABLEIDLETIMEMILLIS = "jdbc.%s.minEvictableIdleTimeMillis";
public static final String JDBC_TIMEBETWEENEVICTIONRUNSMILLIS = "jdbc.%s.timeBetweenEvictionRunsMillis";
/**
* Property keys for the slave jdbc configuration
*/
public static final String JDBC_SLAVE_HOSTS = "jdbc.%s.slave.hosts";
public static final String JDBC_SLAVE_USERNAME = "jdbc.%s.slave.username";
public static final String JDBC_SLAVE_PASSWORD = "jdbc.%s.slave.password";
public static final String JDBC_SLAVE_INITIALSIZE = "jdbc.%s.slave.initialSize";
public static final String JDBC_SLAVE_MAXACTIVE = "jdbc.%s.slave.maxActive";
public static final String JDBC_SLAVE_MAXIDLE = "jdbc.%s.slave.maxIdle";
public static final String JDBC_SLAVE_MINIDLE = "jdbc.%s.slave.minIdle";
public static final String JDBC_SLAVE_MINEVICTABLEIDLETIMEMILLIS = "jdbc.%s.slave.minEvictableIdleTimeMillis";
public static final String JDBC_SLAVE_TIMEBETWEENEVICTIONRUNSMILLIS = "jdbc.%s.slave.timeBetweenEvictionRunsMillis";
private Configuration() {
try {
prop.load(this.getClass().getClassLoader().getResourceAsStream(JDBC_FILE_URL));
} catch (IOException e) {
logger.error(e.getMessage(), e);
}
}
/**
* Get the Configuration singleton
* @return
*/
public static synchronized Configuration getInstance() {
if (instance == null) {
instance = new Configuration();
}
return instance;
}
/**
* Load a property value by key
* @param key
* @return
*/
public String loadProperty(String key) {
return prop.getProperty(key);
}
}
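As a quick usage illustration (the local variable names here are hypothetical), resolving the master URL and the slave hosts of the "basic" set defined above would look like this:
// Hypothetical usage of the Configuration helper for the "basic" set
Configuration cfg = Configuration.getInstance();
String masterUrl = cfg.loadProperty(String.format(Configuration.JDBC_URL, "basic"));
String slaveHosts = cfg.loadProperty(String.format(Configuration.JDBC_SLAVE_HOSTS, "basic"));
// masterUrl -> jdbc:mysql://127.0.0.1:3306/tyshawn_test?useUnicode=true&characterEncoding=utf-8
// slaveHosts -> 127.0.0.1:3306[2];127.0.0.1:3306[1]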
(3) Data source type helper class (unchanged from before)
public class DataSourceTypeManager {
/**
* Data source type constants
*/
private static final String MASTER = "master";
private static final String SLAVE = "slave";
/**
* Data source type in use for the current thread
*/
private static final ThreadLocal<String> dataSourceTypes = new ThreadLocal<String>(){
@Override
protected String initialValue(){
return MASTER;
}
};
public static ThreadLocal<String> getDataSourceType(){
return dataSourceTypes;
}
public static void setSlave() {
dataSourceTypes.set(SLAVE);
}
public static boolean isMaster(Object dataSourceType) {
return dataSourceType.equals(MASTER);
}
}
(4) Custom data source
The difference from the previous custom data source is that here we also override the initialization method afterPropertiesSet(): when the class is initialized it first creates the data sources and then sets targetDataSources manually. defaultTargetDataSource is not defined here; add it yourself if you need it. Finally the parent class's initialization method is called.
When getSlaveKey() is called to pick a slave data source, a weighted random algorithm is used: a data source with a larger weight is more likely to be chosen. With the example weights above (2 and 1), slave0 is picked about two thirds of the time and slave1 about one third.
One thing in the code deserves attention: inside createDataSource() we obtain data sources through getDruidDataSource(), which relies on Spring's method injection (lookup-method). DynamicDataSource is a singleton, but every time we create a data source we must get a brand-new DruidDataSource object.
public class DynamicDataSource extends AbstractRoutingDataSource {
private static Logger logger = Logger.getLogger(DynamicDataSource.class);
private DruidDataSource druidDataSource;
/**
* Configuration prefix, used to select which set of data sources is used
*/
private String prefix = "";
/**
* All data sources
*/
private Map<Object, Object> targetDataSources = new HashMap<>();
/**
* Sum of all slave weights
*/
private int weightSum = 0;
/**
* Mapping from slave data source key to weight
*/
private Map<String, Integer> slaveMap = new HashMap<>();
public void setPrefix(String prefix) {
this.prefix = prefix;
}
@Override
public void afterPropertiesSet() {
createDataSource();
super.setTargetDataSources(targetDataSources);
super.afterPropertiesSet();
}
@Override
protected Object determineCurrentLookupKey() {
Object lookupKey;
String datasourceType = DataSourceTypeManager.getDataSourceType().get();
if (DataSourceTypeManager.isMaster(datasourceType)) {
lookupKey = datasourceType;
}else {
lookupKey = getSlaveKey();
}
// Remove the ThreadLocal value to avoid leaking it to a reused thread
DataSourceTypeManager.getDataSourceType().remove();
logger.info("当前使用的数据源: " + lookupKey);
return lookupKey;
}
/**
* Load the configuration file and create the data sources dynamically
*/
public void createDataSource() {
Configuration cfg = Configuration.getInstance();
// Create the master data source
DruidDataSource masterDataSource = getDruidDataSource();
masterDataSource.setDriverClassName(cfg.loadProperty(String.format(Configuration.JDBC_DRIVER_CLASS_NAME, prefix)));
masterDataSource.setUrl(cfg.loadProperty(String.format(Configuration.JDBC_URL, prefix)));
masterDataSource.setUsername(cfg.loadProperty(String.format(Configuration.JDBC_USERNAME, prefix)));
masterDataSource.setPassword(cfg.loadProperty(String.format(Configuration.JDBC_PASSWORD, prefix)));
masterDataSource.setInitialSize(NumberUtils.toInt(cfg.loadProperty(String.format(Configuration.JDBC_INITIALSIZE, prefix)), 10));
masterDataSource.setMaxActive(NumberUtils.toInt(cfg.loadProperty(String.format(Configuration.JDBC_MAXACTIVE, prefix)), 200));
masterDataSource.setMaxIdle(NumberUtils.toInt(cfg.loadProperty(String.format(Configuration.JDBC_MAXIDLE, prefix)), 100));
masterDataSource.setMinIdle(NumberUtils.toInt(cfg.loadProperty(String.format(Configuration.JDBC_MINIDLE, prefix)), 10));
masterDataSource.setMinEvictableIdleTimeMillis(NumberUtils.toLong(cfg.loadProperty(String.format(Configuration.JDBC_MINEVICTABLEIDLETIMEMILLIS, prefix)), 500000));
masterDataSource.setTimeBetweenEvictionRunsMillis(NumberUtils.toLong(cfg.loadProperty(String.format(Configuration.JDBC_TIMEBETWEENEVICTIONRUNSMILLIS, prefix)), 500000));
targetDataSources.put("master", masterDataSource);
// Create the slave data sources
String hostStr = cfg.loadProperty(String.format(Configuration.JDBC_SLAVE_HOSTS, prefix));
String[] hosts = hostStr.split(";");
for (int i = 0; i < hosts.length; i++) {
String[] hostSplit = hosts[i].split("\\[");
Integer weight = Integer.parseInt(hostSplit[1].split("]")[0]);
String[] urlSplit = masterDataSource.getUrl().split("/");
String url = masterDataSource.getUrl().replace(urlSplit[2], hostSplit[0]);
DruidDataSource slaveDataSource = getDruidDataSource();
slaveDataSource.setDriverClassName(masterDataSource.getDriverClassName());
slaveDataSource.setUrl(url);
slaveDataSource.setUsername(cfg.loadProperty(String.format(Configuration.JDBC_SLAVE_USERNAME, prefix)));
slaveDataSource.setPassword(cfg.loadProperty(String.format(Configuration.JDBC_SLAVE_PASSWORD, prefix)));
slaveDataSource.setInitialSize(NumberUtils.toInt(cfg.loadProperty(String.format(Configuration.JDBC_SLAVE_INITIALSIZE, prefix)), masterDataSource.getInitialSize()));
slaveDataSource.setMaxActive(NumberUtils.toInt(cfg.loadProperty(String.format(Configuration.JDBC_SLAVE_MAXACTIVE, prefix)), masterDataSource.getMaxActive()));
slaveDataSource.setMaxIdle(NumberUtils.toInt(cfg.loadProperty(String.format(Configuration.JDBC_SLAVE_MAXIDLE, prefix)), masterDataSource.getMaxIdle()));
slaveDataSource.setMinIdle(NumberUtils.toInt(cfg.loadProperty(String.format(Configuration.JDBC_SLAVE_MINIDLE, prefix)), masterDataSource.getMinIdle()));
slaveDataSource.setMinEvictableIdleTimeMillis(NumberUtils.toLong(cfg.loadProperty(String.format(Configuration.JDBC_SLAVE_MINEVICTABLEIDLETIMEMILLIS, prefix)), masterDataSource.getMinEvictableIdleTimeMillis()));
slaveDataSource.setTimeBetweenEvictionRunsMillis(NumberUtils.toLong(cfg.loadProperty(String.format(Configuration.JDBC_SLAVE_TIMEBETWEENEVICTIONRUNSMILLIS, prefix)), masterDataSource.getTimeBetweenEvictionRunsMillis()));
targetDataSources.put("slave" + i, slaveDataSource);
slaveMap.put("slave" + i, weight);
weightSum += weight;
}
}
/**
* Pick a slave key using weighted random selection
* @return
*/
public String getSlaveKey() {
int random = new Random().nextInt(weightSum);
for (Map.Entry<String, Integer> entry : slaveMap.entrySet()) {
random -= entry.getValue();
if (random < 0) {
return entry.getKey();
}
}
return null;
}
public DruidDataSource getDruidDataSource() {
return druidDataSource;
}
}
(5) Data source configuration
<bean id="dataSource" class="org.tyshawn.muti_datasource.DynamicDataSource">
<property name="prefix" value="basic"/>
<lookup-method name="getDruidDataSource" bean="baseDataSource"></lookup-method>
</bean>
<bean id="baseDataSource" class="com.alibaba.druid.pool.DruidDataSource" scope="prototype">
<property name="filters" value="stat" />
<property name="timeBetweenLogStatsMillis">
<value>30000</value>
</property>
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource"/>
<property name="mapperLocations" value="classpath:mapper/*.xml"/>
</bean>
(6) Complete Spring configuration
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:aop="http://www.springframework.org/schema/aop"
xmlns:tx="http://www.springframework.org/schema/tx"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">
<context:property-placeholder location="classpath:jdbc/jdbc.properties"/>
<context:component-scan base-package="org.tyshawn"/>
<bean id="dataSource" class="org.tyshawn.muti_datasource.DynamicDataSource">
<property name="prefix" value="basic"/>
<lookup-method name="getDruidDataSource" bean="baseDataSource"></lookup-method>
</bean>
<bean id="baseDataSource" class="com.alibaba.druid.pool.DruidDataSource" scope="prototype">
<property name="filters" value="stat" />
<property name="timeBetweenLogStatsMillis">
<value>30000</value>
</property>
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="dataSource"/>
<property name="mapperLocations" value="classpath:mapper/*.xml"/>
</bean>
<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
<property name="sqlSessionFactoryBeanName" value="sqlSessionFactory"/>
<property name="basePackage" value="org.tyshawn.dao"/>
</bean>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource">property>
bean>
<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="save*" propagation="REQUIRED" rollback-for="Throwable"/>
<tx:method name="update*" propagation="REQUIRED" rollback-for="Throwable"/>
<tx:method name="remove*" propagation="REQUIRED" rollback-for="Throwable"/>
<tx:method name="del*" propagation="REQUIRED" rollback-for="Throwable"/>
</tx:attributes>
</tx:advice>
<aop:config>
<aop:advisor advice-ref="txAdvice" pointcut="execution(* org.tyshawn.service.*.*(..))"/>
</aop:config>
</beans>
(7) Filter
public class ReadAndWriteCheckFilter implements Filter {
private static final Set<String> READ_URLS = new HashSet<>();
@Override
public void init(FilterConfig filterConfig) throws ServletException {
READ_URLS.add("/springmvc/show.do");
// ...
}
@Override
public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {
HttpServletRequest request = (HttpServletRequest) req;
// Switch read-only endpoints to the slave data source
String uri = request.getRequestURI();
if (READ_URLS.contains(uri)) {
DataSourceTypeManager.setSlave();
}
chain.doFilter(req, res);
}
@Override
public void destroy() {}
}
<filter>
<filter-name>ReadAndWriteCheckFilter</filter-name>
<filter-class>org.tyshawn.muti_datasource.ReadAndWriteCheckFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>ReadAndWriteCheckFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
That's all for this post. I only covered the case of one master and multiple slaves; supporting multiple masters only requires small changes on top of the code above. How to set up MySQL master-slave replication is not the topic of this post; the configuration is simple and guides are easy to find online.