Hadoop 3: Serialization and Deserialization in MapReduce
I. Concepts
1. Serialization
Serialization converts an in-memory object into a byte sequence (or another data-transfer format) so that it can be stored on disk (persisted) or transmitted over the network.
2. Deserialization
Deserialization converts a received byte sequence (or other data-transfer format), or data persisted on disk, back into an in-memory object.
In other words, an object (bean) must be serialized before it can be transferred between servers.
II. Seven Steps to Implement Serialization for a Bean Object
1. The class must implement the Writable interface.
2. Deserialization creates the object reflectively through the no-argument constructor, so the class must provide one.
3. Override the serialization method.
4. Override the deserialization method.
5. The deserialization order must be exactly the same as the serialization order, like a pipe: first in, first out; last in, last out.
6. Override toString(), which is used when the result is written to the output file.
7. If the custom bean needs to be transmitted as a key, it must also implement the Comparable interface (in practice, WritableComparable), because the Shuffle phase of the MapReduce framework requires keys to be sortable (see the sketch right after this list).
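The case study below only uses its bean as a value, so plain Writable is enough there. As a minimal sketch of step 7 (the class name and the sort field here are assumptions for illustration, not part of the original post), a bean that travels as a key could look like this:

package com.atguigu.mapreduce.writable;

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

// Sketch only: a bean used as a MapReduce key must be sortable,
// so it implements WritableComparable instead of plain Writable.
public class FlowKeyBean implements WritableComparable<FlowKeyBean> {

    private long sumFlow; // total traffic, used as the sort field (assumed)

    // no-argument constructor, required for reflective instantiation
    public FlowKeyBean() {
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(sumFlow);       // serialize
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.sumFlow = in.readLong(); // deserialize in the same order
    }

    @Override
    public int compareTo(FlowKeyBean o) {
        // sort descending by total traffic; Shuffle uses this to order keys
        return Long.compare(o.sumFlow, this.sumFlow);
    }

    @Override
    public String toString() {
        return String.valueOf(sumFlow);
    }
}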
III. Case Study
1. Requirement analysis
Raw data (fields are tab-separated in the actual input file; columns: id, phone number, network IP, visited URL where present, upstream traffic, downstream traffic, network status code):
1   13736230513   192.196.100.1   www.atguigu.com   2481    24681   200
2   13846544121   192.196.100.2   264     0       200
3   13956435636   192.196.100.3   132     1512    200
4   13966251146   192.168.100.1   240     0       404
5   18271575951   192.168.100.2   www.atguigu.com   1527    2106    200
6   84188413      192.168.100.3   www.atguigu.com   4116    1432    200
7   13590439668   192.168.100.4   1116    954     200
8   15910133277   192.168.100.5   www.hao123.com    3156    2936    200
9   13729199489   192.168.100.6   240     0       200
10  13630577991   192.168.100.7   www.shouhu.com    6960    690     200
11  15043685818   192.168.100.8   www.baidu.com     3659    3538    200
12  15959002129   192.168.100.9   www.atguigu.com   1938    180     500
13  13560439638   192.168.100.10  918     4938    200
14  13470253144   192.168.100.11  180     180     200
15  13682846555   192.168.100.12  www.qq.com        1938    2910    200
16  13992314666   192.168.100.13  www.gaga.com      3008    3720    200
17  13509468723   192.168.100.14  www.qinghua.com   7335    110349  404
18  18390173782   192.168.100.15  www.sogou.com     9531    2412    200
19  13975057813   192.168.100.16  www.baidu.com     11058   48243   200
20  13768778790   192.168.100.17  120     120     200
21  13568436656   192.168.100.18  www.alibaba.com   2481    24681   200
22  13568436656   192.168.100.19  1116    954     200
Requirement: compute the total upstream traffic, total downstream traffic, and total traffic for each phone number.
2. Code implementation
FlowBean.java
The bean implements the Writable interface and overrides its methods. Note that the deserialization order must be exactly the same as the serialization order: first in, first out; last in, last out.
package com.atguigu.mapreduce.writable;

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

/**
 * 1. Define the class and implement the Writable interface
 * 2. Override the serialization and deserialization methods
 * 3. Provide the no-argument constructor
 * 4. Override toString()
 */
public class FlowBean implements Writable {

    private long upFlow;   // upstream traffic
    private long downFlow; // downstream traffic
    private long sumFlow;  // total traffic

    // no-argument constructor, required for reflective instantiation
    public FlowBean() {
    }

    public long getUpFlow() {
        return upFlow;
    }

    public void setUpFlow(long upFlow) {
        this.upFlow = upFlow;
    }

    public long getDownFlow() {
        return downFlow;
    }

    public void setDownFlow(long downFlow) {
        this.downFlow = downFlow;
    }

    public long getSumFlow() {
        return sumFlow;
    }

    public void setSumFlow(long sumFlow) {
        this.sumFlow = sumFlow;
    }

    public void setSumFlow() {
        this.sumFlow = this.upFlow + this.downFlow;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(upFlow);
        out.writeLong(downFlow);
        out.writeLong(sumFlow);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        this.upFlow = in.readLong();
        this.downFlow = in.readLong();
        this.sumFlow = in.readLong();
    }

    @Override
    public String toString() {
        return upFlow + "\t" + downFlow + "\t" + sumFlow;
    }
}
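As a quick sanity check (not part of the original post; the class name FlowBeanRoundTrip is just for illustration), you can round-trip a FlowBean through a byte array with plain java.io streams, which makes the "same order in, same order out" rule concrete:

package com.atguigu.mapreduce.writable;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sketch only: round-trip a FlowBean through a byte array to show that
// readFields() recovers the fields in exactly the order write() emitted them.
public class FlowBeanRoundTrip {
    public static void main(String[] args) throws IOException {
        FlowBean bean = new FlowBean();
        bean.setUpFlow(2481);
        bean.setDownFlow(24681);
        bean.setSumFlow();

        // serialize: object -> byte sequence
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        bean.write(new DataOutputStream(bos));

        // deserialize: byte sequence -> object (no-arg constructor + readFields)
        FlowBean copy = new FlowBean();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));

        System.out.println(copy); // expected (tab-separated): 2481  24681  27162
    }
}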
FlowMapper.java
In the Mapper stage, the input text is read line by line, and only the phone number, upstream traffic, and downstream traffic fields are extracted.
package com.atguigu.mapreduce.writable;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class FlowMapper extends Mapper<LongWritable, Text, Text, FlowBean> {

    private Text outK = new Text();
    private FlowBean outV = new FlowBean();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // 1. Read one line
        // 1  13736230513  192.196.100.1  www.atguigu.com  2481  24681  200
        String line = value.toString();

        // 2. Split on tabs
        // 1,13736230513,192.196.100.1,www.atguigu.com,2481,24681,200  -> 7 fields, 7 - 3 = 4
        // 2,13846544121,192.196.100.2,264,0,200                       -> 6 fields, 6 - 3 = 3
        String[] split = line.split("\t");

        // 3. Pick out the fields we need
        // phone number: 13736230513
        // upstream and downstream traffic: 2481, 24681
        String phone = split[1];
        String up = split[split.length - 3];
        String down = split[split.length - 2];

        // 4. Wrap them in the output key and value
        outK.set(phone);
        outV.setUpFlow(Long.parseLong(up));
        outV.setDownFlow(Long.parseLong(down));
        outV.setSumFlow();

        // 5. Write out
        context.write(outK, outV);
    }
}
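The index arithmetic in the comments above (7 - 3 = 4 and 6 - 3 = 3) is what lets the same code handle lines with and without a URL column. A small standalone sketch (not in the original post; the class name is illustrative) shows why counting from the end of the array works:

// Sketch only: split the two sample lines and show that counting from the
// end of the array picks up the traffic columns whether or not a URL exists.
public class SplitIndexDemo {
    public static void main(String[] args) {
        String withUrl    = "1\t13736230513\t192.196.100.1\twww.atguigu.com\t2481\t24681\t200";
        String withoutUrl = "2\t13846544121\t192.196.100.2\t264\t0\t200";

        for (String line : new String[]{withUrl, withoutUrl}) {
            String[] split = line.split("\t");
            String phone = split[1];                // phone number is always column 1
            String up    = split[split.length - 3]; // upstream traffic
            String down  = split[split.length - 2]; // downstream traffic
            System.out.println(phone + " up=" + up + " down=" + down);
        }
    }
}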
FlowReducer.java
package com.atguigu.mapreduce.writable;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class FlowReducer extends Reducer<Text, FlowBean, Text, FlowBean> {

    private FlowBean outV = new FlowBean();

    @Override
    protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
        // 1. Iterate over the grouped values and accumulate the traffic
        long totalUp = 0;
        long totalDown = 0;
        for (FlowBean value : values) {
            totalUp += value.getUpFlow();
            totalDown += value.getDownFlow();
        }

        // 2. Fill in the output value
        outV.setUpFlow(totalUp);
        outV.setDownFlow(totalDown);
        outV.setSumFlow();

        // 3. Write out
        context.write(key, outV);
    }
}
FlowDriver.java
This part is largely boilerplate.
package com.atguigu.mapreduce.writable;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

public class FlowDriver {

    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // 1. Get the job
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        // 2. Set the jar
        job.setJarByClass(FlowDriver.class);

        // 3. Associate the Mapper and Reducer
        job.setMapperClass(FlowMapper.class);
        job.setReducerClass(FlowReducer.class);

        // 4. Set the Mapper output key and value types
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);

        // 5. Set the final output key and value types
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);

        // 6. Set the input and output paths
        FileInputFormat.setInputPaths(job, new Path("D:\\input\\inputflow"));
        FileOutputFormat.setOutputPath(job, new Path("D:\\hadoop\\output4"));

        // 7. Submit the job
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
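Because the input and output paths are hard-coded to the local Windows file system (D:\...), this driver is meant to be run locally, straight from the IDE. To submit the same job to a cluster (an adjustment not shown in the original post), you would typically read the two paths from args, package the classes into a jar, and launch it with something like hadoop jar flow.jar com.atguigu.mapreduce.writable.FlowDriver <input path> <output path>, where the jar name is hypothetical.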
3. Testing
The output file is generated successfully, and its contents match the expected results.
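As a concrete spot check (derived from the input data above; the original post does not show it): the phone number 13568436656 appears twice, with upstream traffic 2481 + 1116 = 3597 and downstream traffic 24681 + 954 = 25635, so its line in the output file (typically part-r-00000) should read 13568436656 3597 25635 29232.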
IV. Summary
This case gives us a clearer understanding of MapReduce. The map phase is not the same as the ordinary map data structure we are used to: a regular map overwrites the value when the same key appears again, whereas the map phase here (more precisely, the Shuffle that follows it) does not. The data structure it produces looks like this:
<key, list[val1, val2, ...]>
All value objects belonging to the same key are assembled into a list, grouped by key, ready for the reduce phase to iterate over and process.
Original article: https://blog.csdn.net/Brave_heart4pzj/article/details/139342945