1 Project Introduction
Developed on top of bahir-flink; the changes relative to bahir are:
1. Replaced Jedis with Lettuce and changed synchronous reads/writes to asynchronous, greatly improving performance.
2. Added the Table/SQL API, with support for select queries and dimension-table (lookup) joins.
3. Added a lookup query cache (incremental and full).
4. Added whole-row storage, for lookup joins on dimension tables with multiple fields.
5. Added rate limiting, for online debugging of Flink SQL.
6. Added support for newer Flink versions: 1.12, 1.13, 1.14.
7. Unified the expiration policies, among other changes.
8. Added support for Flink CDC deletes and other RowKind.DELETE rows.
9. Added support for select queries.
Because bahir targets a rather old version of the Flink interfaces, the changes are extensive. Development drew on the stream-computing products of both Tencent Cloud and Alibaba Cloud, taking the strengths of each and adding richer functionality.
Note: Redis does not support two-phase commit, so exactly-once semantics cannot be achieved.
2 Usage
2.1 Using the project as a dependency
The project depends on Lettuce (6.2.1) and netty-transport-native-epoll (4.1.82.Final). If your Flink environment already provides these two packages, use flink-connector-redis-1.3.2.jar; otherwise use flink-connector-redis-1.4.1-jar-with-dependencies.jar.

<dependency>
    <groupId>io.github.jeff-zou</groupId>
    <artifactId>flink-connector-redis</artifactId>
    <!-- Uncomment when Lettuce and netty-transport-native-epoll are not provided separately -->
    <!-- <classifier>jar-with-dependencies</classifier> -->
    <version>1.4.1</version>
</dependency>

2.2 Building it yourself
Build with: mvn package -DskipTests, then put the generated jar into the Flink lib directory. No other setup is required.
2.3 Usage example
-- create a redis table
create table redis_table (name varchar, age int) with ('connector'='redis', 'host'='10.11.69.176', 'port'='6379', 'password'='test123', 'redis-mode'='single', 'command'='set');

-- write
insert into redis_table select * from (values('test', 1));

-- query
insert into redis_table select name, age + 1 from redis_table /*+ options('scan.key'='test') */;

create table gen_table (age int, level int, proctime as procTime()) with ('connector'='datagen', 'fields.age.kind'='sequence', 'fields.age.start'='2', 'fields.age.end'='2', 'fields.level.kind'='sequence', 'fields.level.start'='10', 'fields.level.end'='10');

-- lookup join
insert into redis_table select 'test', j.age + 10 from gen_table s left join redis_table for system_time as of proctime as j
on j.name = 'test';
3 Parameters
3.1 Main parameters

| Field | Default | Type | Description |
|---|---|---|---|
| connector | (none) | String | 'redis' |
| host | (none) | String | Redis IP |
| port | 6379 | Integer | Redis port |
| password | null | String | null if not set |
| database | 0 | Integer | db0 is used by default |
| timeout | 2000 | Integer | Connection timeout, in ms |
| cluster-nodes | (none) | String | Cluster IPs and ports; required when redis-mode is cluster, e.g. 10.11.80.147:7000,10.11.80.147:7001,10.11.80.147:8000 |
| command | (none) | String | The Redis command (see the mapping in 3.1.1) |
| redis-mode | (none) | String | Mode type: single, cluster, or sentinel |
| lookup.cache.max-rows | -1 | Integer | Lookup cache size; reduces repeated queries to Redis for the same key |
| lookup.cache.ttl | -1 | Integer | Lookup cache expiry, in seconds; the cache is enabled only when neither max-rows nor ttl is -1 |
| lookup.cache.load-all | false | Boolean | Enables full caching: for the hget command, all elements of the Redis map are loaded into the cache, mitigating cache penetration |
| max.retries | 1 | Integer | Number of retries when a write fails |
| value.data.structure | column | String | column: the value comes from one field (e.g. for set, the key is the first field in the DDL and the value is the second); row: the whole row is stored as the value, separated by '\01' |
| set.if.absent | false | Boolean | Write only when the key does not exist; effective only for set and hset |
| io.pool.size | (none) | Integer | Size of the netty I/O thread pool inside Lettuce; by default the number of threads available to the current JVM, and at least 2 |
| event.pool.size | (none) | Integer | Size of the netty event thread pool inside Lettuce; by default the number of threads available to the current JVM, and at least 2 |
| scan.key | (none) | String | The Redis key to query |
| scan.addition.key | (none) | String | Additional key constraint when querying, e.g. the hashfield of a map structure |
| scan.range.start | (none) | Integer | lrange start when querying a list structure |
| scan.range.stop | (none) | Integer | lrange stop when querying a list structure |
| scan.count | (none) | Integer | srandmember count when querying a set structure |
3.1.1 Mapping between command values and Redis commands

| command | Write | Query | Lookup join | Delete (RowKind.DELETE, e.g. from Flink CDC) |
|---|---|---|---|---|
| set | set | get | get | del |
| hset | hset | hget | hget | hdel |
| get | set | get | get | del |
| hget | hset | hget | hget | hdel |
| rpush | rpush | lrange | | |
| lpush | lpush | lrange | | |
| incrBy incrByFloat | incrBy incrByFloat | get | get | writes the negated delta, e.g. incrby 2 → incrby -2 |
| hincrBy hincrByFloat | hincrBy hincrByFloat | hget | hget | writes the negated delta, e.g. hincrby 2 → hincrby -2 |
| zincrby | zincrby | zscore | zscore | writes the negated delta, e.g. zincrby 2 → zincrby -2 |
| sadd | sadd | srandmember 10 | | srem |
| zadd | zadd | zscore | zscore | zrem |
| pfadd(hyperloglog) | pfadd(hyperloglog) | | | |
| publish | publish | | | |
| zrem | zrem | zscore | zscore | |
| srem | srem | srandmember 10 | | |
| del | del | get | get | |
| hdel | hdel | hget | hget | |
| decrBy | decrBy | get | get | |

Note: an empty cell means unsupported.
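The Delete column above applies when an upstream source (such as Flink CDC) emits a RowKind.DELETE row. A minimal sketch of such a mapping, using hypothetical class and method names rather than the connector's actual internals:

```java
import java.util.Map;
import java.util.Optional;

// Hypothetical sketch: map a write command to the delete command the table
// above pairs it with, for rows that arrive with RowKind.DELETE.
class DeleteCommandMapping {
    private static final Map<String, String> DELETE_FOR_WRITE = Map.of(
            "set", "del",
            "hset", "hdel",
            "sadd", "srem",
            "zadd", "zrem");

    // Returns the delete command, or empty when the table marks it unsupported.
    static Optional<String> deleteCommandFor(String writeCommand) {
        return Optional.ofNullable(DELETE_FOR_WRITE.get(writeCommand.toLowerCase()));
    }

    public static void main(String[] args) {
        System.out.println(deleteCommandFor("hset").orElse("unsupported")); // hdel
        System.out.println(deleteCommandFor("rpush").orElse("unsupported")); // unsupported
    }
}
```

The incrBy-style commands have no separate delete command; the connector undoes a row by writing the negated delta instead.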
3.1.2 value.data.structure = column (default)
There is no need to map the Redis key through a primary key; the key is determined directly by the field order in the DDL, e.g.:

create table sink_redis(username VARCHAR, passport VARCHAR) with ('command'='set');
Here username is the key and passport is the value.

create table sink_redis(name VARCHAR, subject VARCHAR, score VARCHAR) with ('command'='hset');
Here name is the key of the map structure, subject is the field, and score is the value.

3.1.3 value.data.structure = row
The whole row is stored as the value, separated by '\01':

create table sink_redis(username VARCHAR, passport VARCHAR) with ('command'='set');
Here username is the key and username\01passport is the value.

create table sink_redis(name VARCHAR, subject VARCHAR, score VARCHAR) with ('command'='hset');
Here name is the key of the map structure, subject is the field, and name\01subject\01score is the value.

3.2 TTL parameters when sinking
| Field | Default | Type | Description |
|---|---|---|---|
| ttl | (none) | Integer | Key expiry time in seconds, set on every sink write |
| ttl.on.time | (none) | String | Time of day at which the key expires, in LocalTime.toString() format, e.g. 10:00 or 12:12:01; effective only when ttl is not configured |
| ttl.key.not.absent | false | boolean | Used together with ttl: set the TTL only when the key does not exist |
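A ttl.on.time value such as "10:00" has to be converted into a TTL in seconds at write time. A minimal sketch of one plausible interpretation (expire at the next occurrence of that time of day; this is an assumption about the semantics, not the connector's actual code):

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;

// Hypothetical sketch: turn a ttl.on.time value ("10:00", "12:12:01") into a
// TTL in seconds, expiring at the next occurrence of that time of day.
class TtlOnTime {
    static long secondsUntil(LocalTime target, LocalDateTime now) {
        LocalDateTime expiry = now.toLocalDate().atTime(target);
        if (!expiry.isAfter(now)) {
            // The time of day has already passed today; expire tomorrow.
            expiry = expiry.plusDays(1);
        }
        return Duration.between(now, expiry).getSeconds();
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2024, 1, 1, 8, 0, 0);
        System.out.println(secondsUntil(LocalTime.parse("10:00"), now)); // 7200
    }
}
```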
3.3 Parameters limiting sink resource usage when debugging SQL online

| Field | Default | Type | Description |
|---|---|---|---|
| sink.limit | false | Boolean | Whether to enable the limits |
| sink.limit.max-num | 10000 | Integer | Maximum number of records each slot in a taskmanager may write |
| sink.limit.interval | 100 | String | Interval between writes for each slot in a taskmanager, in milliseconds |
| sink.limit.max-online | 30 * 60 * 1000L | Long | Maximum online time for each slot in a taskmanager, in milliseconds |
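The max-num and max-online limits can be sketched as a tiny per-slot guard that stops accepting writes once either bound is hit (a hypothetical illustration of the semantics, not the connector's implementation; the interval parameter would additionally space out writes and is omitted here):

```java
// Hypothetical sketch of the per-slot sink limits: refuse further writes once
// sink.limit.max-num records have been written, or once sink.limit.max-online
// milliseconds have elapsed since the slot came online.
class SinkLimit {
    private final long maxNum;
    private final long maxOnlineMillis;
    private final long startMillis;
    private long written = 0;

    SinkLimit(long maxNum, long maxOnlineMillis, long startMillis) {
        this.maxNum = maxNum;
        this.maxOnlineMillis = maxOnlineMillis;
        this.startMillis = startMillis;
    }

    // Returns true if one more record may be written at time nowMillis.
    boolean tryWrite(long nowMillis) {
        if (written >= maxNum || nowMillis - startMillis >= maxOnlineMillis) {
            return false;
        }
        written++;
        return true;
    }

    public static void main(String[] args) {
        SinkLimit limit = new SinkLimit(2, 1000, 0);
        System.out.println(limit.tryWrite(1));  // true
        System.out.println(limit.tryWrite(2));  // true
        System.out.println(limit.tryWrite(3));  // false: max-num reached
    }
}
```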
3.4 Additional connection parameters when the cluster type is sentinel

| Field | Default | Type | Description |
|---|---|---|---|
| master.name | (none) | String | Master name |
| sentinels.info | (none) | String | e.g. 10.11.80.147:7000,10.11.80.147:7001,10.11.80.147:8000 |
| sentinels.password | (none) | String | Password of the sentinel process |
4 Data type conversion

| Flink type | Redis row converter |
|---|---|
| CHAR | String |
| VARCHAR | String |
| STRING | String |
| BOOLEAN | String: String.valueOf(boolean val); boolean: Boolean.valueOf(String str) |
| BINARY | String: Base64.getEncoder().encodeToString; byte[]: Base64.getDecoder().decode(String str) |
| VARBINARY | String: Base64.getEncoder().encodeToString; byte[]: Base64.getDecoder().decode(String str) |
| DECIMAL | String: BigDecimal.toString; DecimalData: DecimalData.fromBigDecimal(new BigDecimal(String str), int precision, int scale) |
| TINYINT | String: String.valueOf(byte val); byte: Byte.valueOf(String str) |
| SMALLINT | String: String.valueOf(short val); short: Short.valueOf(String str) |
| INTEGER | String: String.valueOf(int val); int: Integer.valueOf(String str) |
| DATE | String: the days since epoch as int; date shown as 2022-01-01 |
| TIME | String: the milliseconds since 0 o'clock as int; time shown as 04:04:01.023 |
| BIGINT | String: String.valueOf(long val); long: Long.valueOf(String str) |
| FLOAT | String: String.valueOf(float val); float: Float.valueOf(String str) |
| DOUBLE | String: String.valueOf(double val); double: Double.valueOf(String str) |
| TIMESTAMP | String: the milliseconds since epoch as long; timestamp: TimestampData.fromEpochMillis(Long.valueOf(String str)) |
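Per the table above, BINARY and VARBINARY values are stored in Redis as Base64 strings and decoded back to byte arrays on read. A minimal round-trip sketch using the JDK's Base64, matching the converter calls named in the table:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of the BINARY/VARBINARY conversion: bytes are written to Redis as a
// Base64 string and decoded back into a byte[] when read.
class BinaryConverter {
    static String toRedisValue(byte[] bytes) {
        return Base64.getEncoder().encodeToString(bytes);
    }

    static byte[] fromRedisValue(String value) {
        return Base64.getDecoder().decode(value);
    }

    public static void main(String[] args) {
        byte[] original = "flink".getBytes(StandardCharsets.UTF_8);
        String stored = toRedisValue(original);
        System.out.println(stored); // Zmxpbms=
        System.out.println(new String(fromRedisValue(stored), StandardCharsets.UTF_8)); // flink
    }
}
```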
5 Examples
5.1 Dimension-table (lookup) query
-- Insert data into redis first, equivalent to the redis command: hset 3 3 100 --
create table sink_redis(name varchar, level varchar, age varchar) with ('connector'='redis', 'host'='10.11.80.147', 'port'='7001', 'redis-mode'='single', 'password'='******', 'command'='hset');
insert into sink_redis select * from (values ('3', '3', '100'));

create table dim_table (name varchar, level varchar, age varchar) with ('connector'='redis', 'host'='10.11.80.147', 'port'='7001', 'redis-mode'='single', 'password'='*****', 'command'='hget', 'maxIdle'='2', 'minIdle'='1', 'lookup.cache.max-rows'='10', 'lookup.cache.ttl'='10', 'lookup.max-retries'='3');

-- Randomly generate data within 10 as the source --
-- One of the rows will have username = 3 and level = 3, which joins with the data inserted above --
create table source_table (username varchar, level varchar, proctime as procTime()) with ('connector'='datagen', 'rows-per-second'='1', 'fields.username.kind'='sequence', 'fields.username.start'='1', 'fields.username.end'='10', 'fields.level.kind'='sequence', 'fields.level.start'='1', 'fields.level.end'='10');

create table sink_table(username varchar, level varchar, age varchar) with ('connector'='print');

insert into sink_table
select s.username, s.level, d.age
from source_table s
left join dim_table for system_time as of s.proctime as d on d.name = s.username and d.level = s.level;
-- The row with username = 3 joins to the value in redis; the output is 3,3,100

5.2 Lookup join against a dimension table with multiple fields
In many cases a dimension table has multiple fields. This example shows how to write multiple fields with 'value.data.structure'='row' and query them in a lookup join.

-- create table
create table sink_redis(uid VARCHAR, score double, score2 double)
with ('connector'='redis', 'host'='10.11.69.176', 'port'='6379', 'redis-mode'='single', 'password'='****', 'command'='SET', 'value.data.structure'='row'); -- value.data.structure=row: the whole row is stored as the value, separated by '\01'

-- Write test data; score and score2 are the two dimension fields to be retrieved by the join
insert into sink_redis select * from (values ('1', 10.3, 10.1));
-- The value in redis is: 1\x0110.3\x0110.1 --
-- write finished --

-- create join table --
create table join_table with ('command'='get', 'value.data.structure'='row') like sink_redis;

-- create result table --
create table result_table(uid VARCHAR, username VARCHAR, score double, score2 double) with ('connector'='print');

-- create source table --
create table source_table(uid VARCHAR, username VARCHAR, proc_time as procTime()) with ('connector'='datagen', 'fields.uid.kind'='sequence', 'fields.uid.start'='1', 'fields.uid.end'='2');

-- Lookup join retrieving multiple fields from the dimension table --
insert into result_table
select s.uid, s.username,
       j.score,  -- from the dimension table
       j.score2  -- from the dimension table
from source_table as s
join join_table for system_time as of s.proc_time as j on j.uid = s.uid;

result:
2> +I[2, 1e0fe885a2990edd7f13dd0b81f923713182d5c559b21eff6bda3960cba8df27c69a3c0f26466efaface8976a2e16d9f68b3, null, null]
1> +I[1, 30182e00eca2bff6e00a2d5331e8857a087792918c4379155b635a3cf42a53a1b8f3be7feb00b0c63c556641423be5537476, 10.3, 10.1]

5.3 DataStream queries
Example code path: src/test/java/org.apache.flink.streaming.connectors.redis.datastream.DataStreamTest.java
hset example, equivalent to the redis command: hset tom math 150

Configuration configuration = new Configuration();
configuration.setString(REDIS_MODE, REDIS_CLUSTER);
configuration.setString(REDIS_COMMAND, RedisCommand.HSET.name());

RedisSinkMapper redisMapper = (RedisSinkMapper) RedisHandlerServices
        .findRedisHandler(RedisMapperHandler.class, configuration.toMap())
        .createRedisMapper(configuration);

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

GenericRowData genericRowData = new GenericRowData(3);
genericRowData.setField(0, "tom");
genericRowData.setField(1, "math");
genericRowData.setField(2, 152);
DataStream<GenericRowData> dataStream = env.fromElements(genericRowData, genericRowData);

RedisSinkOptions redisSinkOptions =
        new RedisSinkOptions.Builder().setMaxRetryTimes(3).build();
FlinkConfigBase conf =
        new FlinkSingleConfig.Builder()
                .setHost(REDIS_HOST)
                .setPort(REDIS_PORT)
                .setPassword(REDIS_PASSWORD)
                .build();

RedisSinkFunction redisSinkFunction =
        new RedisSinkFunction(conf, redisMapper, redisSinkOptions, resolvedSchema);

dataStream.addSink(redisSinkFunction).setParallelism(1);
env.execute("RedisSinkTest");

5.4 Writing to redis-cluster
Example code path: src/test/java/org.apache.flink.streaming.connectors.redis.table.SQLInsertTest.java
set example, equivalent to the redis command: set test test11
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
EnvironmentSettings environmentSettings =
        EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, environmentSettings);

String ddl =
        "create table sink_redis(username VARCHAR, passport VARCHAR) with ( "
                + "'connector'='redis', "
                + "'cluster-nodes'='10.11.80.147:7000,10.11.80.147:7001', "
                + "'redis-mode'='cluster', "
                + "'password'='******', "
                + "'command'='set')";
tEnv.executeSql(ddl);

String sql = "insert into sink_redis select * from (values ('test', 'test11'))";
TableResult tableResult = tEnv.executeSql(sql);
tableResult.getJobClient().get()
        .getJobExecutionResult()
        .get();

6 For problems, contact me at
https://github.com/jeff-zou/flink-connector-redis.git
7 Development and test environment
ide: IntelliJ IDEA
code format: google-java-format + Save Actions
code check: CheckStyle
flink 1.12/1.13/1.14
jdk 1.8, Lettuce 6.2.1
8 For Flink 1.12 support, switch to the flink-1.12 branch (note: 1.12 uses Jedis)

<dependency>
    <groupId>io.github.jeff-zou</groupId>
    <artifactId>flink-connector-redis</artifactId>
    <version>1.1.1-1.12</version>
</dependency>