I'm using a Spark notebook in Microsoft Fabric. I want to build a column mapping from metadata stored in the Lakehouse, and write that mapping into a "mapping" column of a DataFrame that lists the tables.
My current attempt looks like this:
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

# Create an initial DataFrame listing the tables
dataframe_tablelist = spark.createDataFrame(
    [
        ("abcd", "AB", "t1"),
        ("efgh", "CD", "t2"),
        ("efgh", "CD", "t3"),
    ],
    ["database", "entity", "table_name"],
)

def construct_mapping(database, entity, table_name):
    meta_name = "Metadata_" + database + "_" + entity + "_" + table_name
    metadata = spark.sql(f"select * from {meta_name}")
    # Here I would construct the mapping from the metadata
    return meta_name

udf_constructor = udf(construct_mapping, StringType())

mapping_df = dataframe_tablelist.withColumn(
    "test_column",
    udf_constructor(
        dataframe_tablelist.database,
        dataframe_tablelist.entity,
        dataframe_tablelist.table_name,
    ),
)
display(mapping_df)
I get this error, which I don't understand at all:
PicklingError: Could not serialize object: PySparkRuntimeError: [CONTEXT_ONLY_VALID_ON_DRIVER] It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.
I could probably make it work with collect() and appending row by row, but I'd like to do it the "right" way.
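For context, a minimal sketch of the driver-side approach hinted at above: since `spark.sql` is only valid on the driver, the table list is collected first, each metadata table is queried in a plain Python loop on the driver, and the results are assembled into a new DataFrame. The helper names (`construct_meta_name`, `build_mapping_rows`) and the injected `run_sql` callable are my own illustration, not part of the original code; the real mapping logic is left as a placeholder.

```python
def construct_meta_name(database, entity, table_name):
    # Pure string logic, mirroring the naming scheme in the question;
    # safe to run anywhere (driver or worker).
    return f"Metadata_{database}_{entity}_{table_name}"

def build_mapping_rows(rows, run_sql):
    # rows: (database, entity, table_name) tuples collected on the driver.
    # run_sql: a callable that executes SQL on the driver, e.g. spark.sql.
    # Passing it in keeps this function testable without a Spark session.
    out = []
    for database, entity, table_name in rows:
        meta_name = construct_meta_name(database, entity, table_name)
        metadata = run_sql(f"select * from {meta_name}")  # driver-side only
        # ... derive the real mapping from `metadata` here ...
        out.append((database, entity, table_name, meta_name))
    return out
```

Used from a notebook it might look like (assuming the `dataframe_tablelist` defined above):

```python
rows = dataframe_tablelist.collect()  # small table list, so collect is fine
mapping_df = spark.createDataFrame(
    build_mapping_rows(rows, spark.sql),
    ["database", "entity", "table_name", "mapping"],
)
```

This avoids the pickling error entirely because no UDF closure ever captures the SparkSession.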