Spark Real-Time (6): Output Sinks Demonstration
After the streaming data has been processed, the results can be written out to files, Kafka, the console, or memory, or handled with custom logic via foreach. Writing results to Kafka is described in detail in the section on integrating StructuredStreaming with Kafka.
For sinks that can guarantee end-to-end fault tolerance, a checkpoint directory must be specified so that progress information can be written to it; the directory can be a path on HDFS. The checkpoint location can be set either through the SparkSession or through the DataStreamWriter, as follows:
//Set the checkpoint via SparkSession
spark.conf.set("spark.sql.streaming.checkpointLocation","hdfs://mycluster/checkpointdir")
or
//Set the checkpoint via DataStreamWriter
df.writeStream
  .format("xxx")
  .option("checkpointLocation","./checkpointdir")
  .start()
The checkpoint directory contains the following subdirectories and data:
- offsets: records the offsets consumed by each batch.
- commits: records the batches that have completed, so that after a restart the completed batches can be compared against the recorded offsets and consumption can continue from the correct offset for the next batch.
- metadata: stores the job's metadata, such as the query id.
- sources: details of what each batch read from the data sources.
- sinks: details of the batches written out by the sinks.
- state: records state values; for example, aggregation and deduplication scenarios record the corresponding state, and snapshot files are generated periodically to capture it.
Below, the File, memory, and foreach output sinks are demonstrated; the console sink is only sketched briefly first.
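The console sink mentioned above does not get its own section, so here is only a minimal sketch for quick local debugging (the object name is made up for illustration; the socket host and port follow the examples below):
package com.lanson.structuredStreaming.sink

import org.apache.spark.sql.{DataFrame, SparkSession}

/**
 * A minimal console-sink sketch for local debugging.
 */
object ConsoleSinkSketch {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local")
      .appName("Console Sink")
      .config("spark.sql.shuffle.partitions", 1)
      .getOrCreate()
    spark.sparkContext.setLogLevel("Error")

    val result: DataFrame = spark.readStream
      .format("socket")
      .option("host", "node3")
      .option("port", 9999)
      .load()

    //Each micro-batch is printed to stdout; the console sink is for debugging only
    //and does not require a checkpoint location
    result.writeStream
      .format("console")
      .start()
      .awaitTermination()
  }
}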
I. File Sink
The File Sink writes the result data in real time to files under the specified path; every write produces a new file. The file format can be parquet, orc, json, or csv.
The Scala code is as follows:
package com.lanson.structuredStreaming.sink

import org.apache.spark.sql.streaming.StreamingQuery
import org.apache.spark.sql.{DataFrame, SparkSession}

/**
 * Read socket data and write it to CSV files
 */
object FileSink {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local")
      .appName("File Sink")
      .config("spark.sql.shuffle.partitions", 1)
      .getOrCreate()

    val result: DataFrame = spark.readStream
      .format("socket")
      .option("host", "node3")
      .option("port", 9999)
      .load()

    val query: StreamingQuery = result.writeStream
      .format("csv")
      .option("path", "./dataresult/csvdir")
      .option("checkpointLocation", "./checkpoint/dir3")
      .start()

    query.awaitTermination()
  }
}
After data is entered into the socket, each batch of data is written to a new CSV file.
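To inspect what was written, the output directory can be read back with a normal batch query. A small sketch (the path matches the option used above; the object name is just for illustration):
package com.lanson.structuredStreaming.sink

import org.apache.spark.sql.SparkSession

/**
 * A sketch: read the File Sink output back as a batch DataFrame.
 */
object ReadFileSinkResult {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local")
      .appName("Read File Sink Result")
      .getOrCreate()

    //Each streaming batch produced one CSV file under this directory
    val csvDF = spark.read.csv("./dataresult/csvdir")
    csvDF.show()

    spark.stop()
  }
}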
II. Memory Sink
The Memory Sink stores the results as an in-memory table and supports the Append and Complete output modes (a Complete-mode sketch follows the example below). Writing results to an in-memory table is mostly used for testing, so use it with caution for large data volumes. To query the result table, you also need a loop that reads the in-memory data at regular intervals.
The Scala code is as follows:
package com.lanson.structuredStreaming.sink

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.streaming.StreamingQuery

/**
 * Read socket data, write it to the memory sink, then query it back
 */
object MemorySink {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local")
      .appName("Memory Sink")
      .config("spark.sql.shuffle.partitions", 1)
      .getOrCreate()
    spark.sparkContext.setLogLevel("Error")

    val result: DataFrame = spark.readStream
      .format("socket")
      .option("host", "node3")
      .option("port", 9999)
      .load()

    val query: StreamingQuery = result.writeStream
      .format("memory")
      .queryName("mytable")
      .start()

    //Query the in-memory table periodically
    while (true) {
      Thread.sleep(2000)
      spark.sql(
        """
          |select * from mytable
        """.stripMargin).show()
    }

    query.awaitTermination()
  }
}
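The example above runs in the default Append mode. With an aggregation query, Complete mode can also be used, so that the full up-to-date result table is kept in memory. A minimal word-count sketch (the object name and the query name "wordcounts" are just for illustration; host and port follow the example above):
package com.lanson.structuredStreaming.sink

import org.apache.spark.sql.{DataFrame, SparkSession}

/**
 * A sketch: Memory Sink with Complete output mode over an aggregation (word count).
 */
object MemoryCompleteSink {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local")
      .appName("Memory Sink Complete Mode")
      .config("spark.sql.shuffle.partitions", 1)
      .getOrCreate()
    spark.sparkContext.setLogLevel("Error")
    import spark.implicits._

    val lines: DataFrame = spark.readStream
      .format("socket")
      .option("host", "node3")
      .option("port", 9999)
      .load()

    //Word count over the socket lines; aggregations allow Complete mode
    val counts: DataFrame = lines.as[String]
      .flatMap(_.split(" "))
      .groupBy("value")
      .count()

    counts.writeStream
      .format("memory")
      .queryName("wordcounts")
      .outputMode("complete")
      .start()

    //Poll the in-memory table periodically, as in the example above
    while (true) {
      Thread.sleep(2000)
      spark.sql("select * from wordcounts").show()
    }
  }
}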
III. Foreach Sink
foreach lets you apply custom processing logic to the output data. Besides foreach there is also foreachBatch; the difference is that foreach applies the custom logic to each individual row, while foreachBatch applies it to the current micro-batch.
1. foreachBatch
foreachBatch applies custom processing to each batch of data. It takes a function with two parameters: the DataFrame of the current batch and the current batchId.
Example: read socket data in real time and write the results to MySQL in batches.
The Scala code is as follows:
package com.lanson.structuredStreaming.sink

import org.apache.spark.sql.streaming.StreamingQuery
import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}

/**
 * Read socket data and write it to MySQL in batches
 */
object ForeachBatchTest {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().appName("ForeachBatch Sink")
      .master("local")
      .config("spark.sql.shuffle.partitions", 1)
      .getOrCreate()
    import spark.implicits._

    val df: DataFrame = spark.readStream
      .format("socket")
      .option("host", "node2")
      .option("port", 9999)
      .load()

    val personDF: DataFrame = df.as[String].map(line => {
      val arr: Array[String] = line.split(",")
      (arr(0).toInt, arr(1), arr(2).toInt)
    }).toDF("id", "name", "age")

    val query: StreamingQuery = personDF.writeStream
      .foreachBatch((batchDF: DataFrame, batchId: Long) => {
        println("batchID : " + batchId)
        //Save the current batch to MySQL
        batchDF.write.mode(SaveMode.Append).format("jdbc")
          .option("url", "jdbc:mysql://node3:3306/testdb?useSSL=false")
          .option("user", "root")
          .option("password", "123456")
          .option("dbtable", "person")
          .save()
      }).start()

    query.awaitTermination()
  }
}
Run results: the console prints each batchId, and each batch of data is appended to the MySQL person table.
The Java code is as follows:
package com.lanson.structuredStreaming.sink;

import java.util.concurrent.TimeoutException;
import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.api.java.function.VoidFunction2;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQueryException;
import scala.Tuple3;

public class ForeachBatchTest01 {
  public static void main(String[] args) throws TimeoutException, StreamingQueryException {
    SparkSession spark = SparkSession.builder().master("local")
        .appName("ForeachBatchTest01")
        .config("spark.sql.shuffle.partitions", 1)
        .getOrCreate();
    spark.sparkContext().setLogLevel("Error");

    Dataset<Row> result = spark.readStream()
        .format("socket")
        .option("host", "node2")
        .option("port", 9999)
        .load()
        .as(Encoders.STRING())
        .map(new MapFunction<String, Tuple3<Integer, String, Integer>>() {
          @Override
          public Tuple3<Integer, String, Integer> call(String line) throws Exception {
            String[] arr = line.split(",");
            return new Tuple3<>(Integer.valueOf(arr[0]), arr[1], Integer.valueOf(arr[2]));
          }
        }, Encoders.tuple(Encoders.INT(), Encoders.STRING(), Encoders.INT()))
        .toDF("id", "name", "age");

    result.writeStream()
        .foreachBatch(new VoidFunction2<Dataset<Row>, Long>() {
          @Override
          public void call(Dataset<Row> df, Long batchId) throws Exception {
            System.out.println("batchID : " + batchId);
            //Save the batch DataFrame to MySQL
            df.write().format("jdbc")
                .mode(SaveMode.Append)
                .option("url", "jdbc:mysql://node3:3306/testdb?useSSL=false")
                .option("user", "root")
                .option("password", "123456")
                .option("dbtable", "person")
                .save();
          }
        }).start()
        .awaitTermination();
  }
}
Run results: as with the Scala version, each batch is appended to the MySQL person table.
Before running, create the testdb database and the person table in MySQL (the table can also be left uncreated, since the JDBC writer creates it if it does not exist):
create database testdb;
create table person(id int(10),name varchar(255),age int(2));
After starting the job, enter the following data into the socket:
1,zs,18
2,ls,19
3,ww,20
4,ml,21
5,tq,22
6,ll,29
The written rows can then be seen in the MySQL person table.
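If you prefer to verify from Spark rather than the MySQL client, the person table can also be read back with Spark's JDBC reader. A small sketch (connection settings match the example above; the object name is just for illustration):
package com.lanson.structuredStreaming.sink

import org.apache.spark.sql.{DataFrame, SparkSession}

/**
 * A sketch: read the person table back over JDBC to check the streaming write.
 */
object ReadPersonTable {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().master("local")
      .appName("Read Person Table")
      .getOrCreate()

    //Batch read of the table the streaming job writes into
    val personDF: DataFrame = spark.read.format("jdbc")
      .option("url", "jdbc:mysql://node3:3306/testdb?useSSL=false")
      .option("user", "root")
      .option("password", "123456")
      .option("dbtable", "person")
      .load()
    personDF.show()

    spark.stop()
  }
}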
2. foreach
foreach applies custom processing to each individual row of the result data.
Example: read socket data in real time and write the results to MySQL row by row.
The Scala code is as follows:
package com.lanson.structuredStreaming.sink

import java.sql.{Connection, DriverManager, PreparedStatement}
import org.apache.spark.sql.{DataFrame, ForeachWriter, Row, SparkSession}

/**
 * Read socket data and write it to MySQL row by row
 */
object ForeachSinkTest {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession.builder().appName("Foreach Sink")
      .master("local")
      .config("spark.sql.shuffle.partitions", 1)
      .getOrCreate()
    spark.sparkContext.setLogLevel("Error")
    import spark.implicits._

    val df: DataFrame = spark.readStream
      .format("socket")
      .option("host", "node2")
      .option("port", 9999)
      .load()

    val personDF: DataFrame = df.as[String].map(line => {
      val arr: Array[String] = line.split(",")
      (arr(0).toInt, arr(1), arr(2).toInt)
    }).toDF("id", "name", "age")

    personDF.writeStream
      .foreach(new ForeachWriter[Row]() {
        var conn: Connection = _
        var pst: PreparedStatement = _

        //Open resources
        override def open(partitionId: Long, epochId: Long): Boolean = {
          conn = DriverManager.getConnection("jdbc:mysql://node3:3306/testdb?useSSL=false", "root", "123456")
          pst = conn.prepareStatement("insert into person values (?,?,?)")
          true
        }

        //Process data row by row
        override def process(row: Row): Unit = {
          val id: Int = row.getInt(0)
          val name: String = row.getString(1)
          val age: Int = row.getInt(2)
          pst.setInt(1, id)
          pst.setString(2, name)
          pst.setInt(3, age)
          pst.executeUpdate()
        }

        //Close and release resources
        override def close(errorOrNull: Throwable): Unit = {
          pst.close()
          conn.close()
        }
      }).start()
      .awaitTermination()
  }
}
Run results: each row is inserted into the MySQL person table.
The Java code is as follows:
package com.lanson.structuredStreaming.sink;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.concurrent.TimeoutException;
import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.ForeachWriter;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQueryException;
import scala.Tuple3;

public class ForeachSinkTest01 {
  public static void main(String[] args) throws TimeoutException, StreamingQueryException {
    SparkSession spark = SparkSession.builder().master("local")
        .appName("SSReadSocketData")
        .config("spark.sql.shuffle.partitions", 1)
        .getOrCreate();
    spark.sparkContext().setLogLevel("Error");

    Dataset<Row> result = spark.readStream()
        .format("socket")
        .option("host", "node2")
        .option("port", 9999)
        .load()
        .as(Encoders.STRING())
        .map(new MapFunction<String, Tuple3<Integer, String, Integer>>() {
          @Override
          public Tuple3<Integer, String, Integer> call(String line) throws Exception {
            String[] arr = line.split(",");
            return new Tuple3<>(Integer.valueOf(arr[0]), arr[1], Integer.valueOf(arr[2]));
          }
        }, Encoders.tuple(Encoders.INT(), Encoders.STRING(), Encoders.INT()))
        .toDF("id", "name", "age");

    result.writeStream()
        .foreach(new ForeachWriter<Row>() {
          Connection conn;
          PreparedStatement pst;

          //Open resources
          @Override
          public boolean open(long partitionId, long epochId) {
            try {
              conn = DriverManager.getConnection("jdbc:mysql://node3:3306/testdb?useSSL=false", "root", "123456");
              pst = conn.prepareStatement("insert into person values (?,?,?)");
            } catch (SQLException e) {
              e.printStackTrace();
            }
            return true;
          }

          //Process data row by row
          @Override
          public void process(Row row) {
            int id = row.getInt(0);
            String name = row.getString(1);
            int age = row.getInt(2);
            try {
              pst.setInt(1, id);
              pst.setString(2, name);
              pst.setInt(3, age);
              pst.executeUpdate();
            } catch (SQLException e) {
              e.printStackTrace();
            }
          }

          //Close and release resources
          @Override
          public void close(Throwable errorOrNull) {
            try {
              pst.close();
              conn.close();
            } catch (SQLException e) {
              e.printStackTrace();
            }
          }
        }).start().awaitTermination();
  }
}
Run: after the above code is written, clear the data in the MySQL person table and then enter the following data into the socket:
1,zs,18
2,ls,19
3,ww,20
4,ml,21
5,tq,22
6,ll,29
1,zs,18
2,ls,19
3,ww,20
4,ml,21
5,tq,22
6,ll,29
The rows then appear in the MySQL person table.
Original article: https://blog.csdn.net/xiaoweite1/article/details/140733459