How to partition and write DataFrame in Spark without deleting partitions with no new data?

This is an old topic, but I was having the same problem and found another solution, just set your partition overwrite mode to dynamic by using:

spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')

So, my spark session is configured like this:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('AppName').getOrCreate()
spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')
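
To illustrate the effect, here is a minimal sketch of the write that benefits from dynamic mode (the output path, partition column, and sample rows are just placeholders, not from the original question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('AppName').getOrCreate()
spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')

# Hypothetical data with a partition column.
df = spark.createDataFrame(
    [('2024-01-02', 'b', 2)],
    ['event_date', 'key', 'value'],
)

# With dynamic partition overwrite, only the partitions present in df
# (here event_date=2024-01-02) are replaced; partitions already on disk
# that receive no new data are left untouched.
df.write.mode('overwrite').partitionBy('event_date').parquet('/tmp/events')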
