I'm getting started with pyspark.sql and trying to read a simple csv file from a jupyter-notebook. Please see the code below:
from pyspark.sql import SparkSession
spark = SparkSession \
.builder \
.getOrCreate()
data_path = '//Users/myuser/pysparktest/'
utilization_path = data_path + '/utilization.csv'
user_df = spark.read.csv(utilization_path)
But I get the following error, which I haven't been able to resolve:
24/06/05 23:14:32 WARN FileStreamSink: Assume no metadata directory. Error while looking for metadata directory in the path: //Users/myuser/pysparktest/utilization.csv.
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "null"
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3443)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3466)
at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:53)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:366)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:229)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:211)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:538)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
at java.lang.Thread.run(Thread.java:750)
Can anyone help me figure out what I'm missing here?
Thanks,
I tried installing the native hadoop libraries following this tutorial:
https://medium.com/@GalarnykMichael/install-spark-on-mac-pyspark-453f395f240b#.be80dcqat
I also tried uninstalling and reinstalling spark, pyspark, and jupyter several times.
Expecting:
To be able to read a simple csv file.
First, your path string starts with two slashes. Then you concatenate two strings, the first ending with a slash and the second starting with a slash, which produces the path
//Users/myuser/pysparktest//utilization.csv
(even though the traceback shows it being treated as //Users/myuser/pysparktest/utilization.csv).
The path should instead be: /Users/myuser/pysparktest/utilization.csv
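Here is a minimal sketch of the corrected read, assuming the file really lives at /Users/myuser/pysparktest/utilization.csv and that the header/inferSchema options fit your data (both are assumptions, adjust them as needed):

import os
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Single leading slash; os.path.join avoids doubled separators when joining path pieces
data_path = '/Users/myuser/pysparktest'
utilization_path = os.path.join(data_path, 'utilization.csv')

# header/inferSchema are assumptions about the file -- change them to match your csv
user_df = spark.read.csv(utilization_path, header=False, inferSchema=True)
user_df.show(5)

# An explicit local-filesystem URI also works for local files:
# user_df = spark.read.csv('file:///Users/myuser/pysparktest/utilization.csv')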