Reading Hive and MySQL data with Spark
Reading Hive data
Essentially: Spark SQL contacts the Hive Metastore service to fetch the table metadata, then reads the underlying files at the locations that metadata points to and performs the computation itself.
Start the following services:
start-dfs.sh
start-yarn.sh
mapred --daemon start historyserver
/opt/installs/spark/sbin/start-history-server.sh
hive-server-manager.sh start metastore
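Before moving on, it is worth confirming that the processes actually started (for example with jps) and that the metastore is listening on its thrift port (9083 in the config below).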
Modify the configuration file
cd /opt/installs/spark/conf
Create a new file: hive-site.xml
vi hive-site.xml
Add the following configuration to this file, pointing hive.metastore.uris at your own metastore host:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://bigdata01:9083</value>
    </property>
</configuration>
Then distribute the file to the other nodes:
xsync.sh hive-site.xml
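With the metastore service running and hive-site.xml distributed, the following PySpark script, run here from a Windows development machine (the paths are the author's local setup), connects to the metastore and queries a Hive table: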
import os
from pyspark.sql import SparkSession
if __name__ == '__main__':
    # Point at the local JDK
    os.environ['JAVA_HOME'] = 'E:/java-configuration/jdk-8'
    # Path to the Hadoop installation unpacked earlier
    os.environ['HADOOP_HOME'] = 'E:/applications/bigdata_config/hadoop-3.3.1/hadoop-3.3.1'
    # Python interpreter for the executors (base conda environment)
    os.environ['PYSPARK_PYTHON'] = 'C:/Users/35741/miniconda3/python.exe'
    # Python interpreter for the driver
    os.environ['PYSPARK_DRIVER_PYTHON'] = 'C:/Users/35741/miniconda3/python.exe'
    # User to act as when talking to HDFS
    os.environ['HADOOP_USER_NAME'] = 'root'

    spark = SparkSession.builder \
        .master("local[2]") \
        .appName("first sparksql example") \
        .config("spark.sql.warehouse.dir", 'hdfs://shucang:9820/user/hive/warehouse') \
        .config('hive.metastore.uris', 'thrift://shucang:9083') \
        .config("spark.sql.shuffle.partitions", 2) \
        .enableHiveSupport() \
        .getOrCreate()

    spark.sql("select * from yhdb01.sql2_1").createOrReplaceTempView("sql2_1")
    spark.sql("select * from sql2_1").show()

    spark.stop()
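As an aside, with Hive support enabled, spark.table loads a Hive table directly as a DataFrame, so the first spark.sql call above has this shorthand (using the same session):

# Equivalent shorthand: load the Hive table as a DataFrame without writing SQL
df = spark.table("yhdb01.sql2_1")
df.show()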
Reading MySQL table data with Spark
Running a JDBC read without the MySQL driver on the classpath fails with:
java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
The fix is to put the MySQL driver jar where pyspark can find it.
Windows: find the environment that holds the project's pyspark package and copy the driver jar into its jars directory:
C:\Users\35741\miniconda3\Lib\site-packages\pyspark\jars
Linux:
cd /opt/installs/anaconda3/lib/python3.8/site-packages/pyspark/jars
The jar must be placed in the pyspark installation on every node.
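As an alternative to copying the jar onto every node, the driver jar can also be handed to Spark when the session is built, via the spark.jars config. A minimal sketch, where the jar path is illustrative and should point at your actual MySQL connector jar:

from pyspark.sql import SparkSession

# spark.jars ships the listed jars to the driver and executors;
# the path below is an assumption, use your local connector jar
spark = SparkSession.builder \
    .master("local[2]") \
    .appName("jdbc with explicit driver jar") \
    .config("spark.jars", "E:/drivers/mysql-connector-j-8.0.33.jar") \
    .getOrCreate()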
import os
from pyspark.sql import SparkSession
if __name__ == '__main__':
    # Point at the local JDK
    os.environ['JAVA_HOME'] = 'E:/java-configuration/jdk-8'
    # Path to the Hadoop installation unpacked earlier
    os.environ['HADOOP_HOME'] = 'E:/applications/bigdata_config/hadoop-3.3.1/hadoop-3.3.1'
    # Python interpreter for the executors (base conda environment)
    os.environ['PYSPARK_PYTHON'] = 'C:/Users/35741/miniconda3/python.exe'
    # Python interpreter for the driver
    os.environ['PYSPARK_DRIVER_PYTHON'] = 'C:/Users/35741/miniconda3/python.exe'

    spark = SparkSession.builder \
        .master("local[2]") \
        .appName("spark jdbc example") \
        .config("spark.sql.shuffle.partitions", 2) \
        .getOrCreate()

    # Option 1: spark.read.jdbc with a connection-properties dict
    props = {"user": "root", "password": "root"}
    empDf = spark.read.jdbc(url="jdbc:mysql://localhost:3306/mydb01", table="emp", properties=props)
    empDf.createOrReplaceTempView("emp")
    spark.sql("""
        select * from emp
    """).show()

    # Option 2: spark.read.format("jdbc") with options, finished by load()
    empDf2 = spark.read.format("jdbc") \
        .option("driver", "com.mysql.cj.jdbc.Driver") \
        .option("url", "jdbc:mysql://localhost:3306/mydb01") \
        .option("dbtable", "emp") \
        .option("user", "root") \
        .option("password", "root") \
        .load()
    empDf2.createOrReplaceTempView("emp2")
    spark.sql("""
        select * from emp2
    """).show()

    spark.stop()
Original article: https://blog.csdn.net/weixin_52642840/article/details/144458092