How to use collect() in PySpark
collect() retrieves all rows of a DataFrame to the driver as a Python list of Row objects. Combined with a list comprehension, it converts a PySpark DataFrame column into a Python list.
Syntax: [data[0] for data in dataframe.select('column_name').collect()]
Where,
- dataframe is the PySpark DataFrame
- data is each Row object returned by collect()
- column_name is the column in the DataFrame
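The syntax above can be wrapped in a small reusable helper. Note that column_to_list is a hypothetical name introduced here for illustration, not part of the PySpark API:

```python
# Hypothetical helper (assumption: not a built-in PySpark method) that
# wraps the comprehension pattern shown in the syntax above.
def column_to_list(dataframe, column_name):
    """Return the values of one DataFrame column as a Python list."""
    # collect() returns a list of Row objects; data[0] takes the
    # single selected field from each row
    return [data[0] for data in dataframe.select(column_name).collect()]
```

Any object exposing select() and collect() with the same behavior works, which also makes the helper easy to unit-test without a Spark session.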
Example: Python code to convert DataFrame columns to lists using the collect() method
Python3
# display the college column as a list using a comprehension
print([data[0] for data in dataframe.select('college').collect()])

# display the student ID column as a list using a comprehension
print([data[0] for data in dataframe.select('student ID').collect()])

# display the subject1 column as a list using a comprehension
print([data[0] for data in dataframe.select('subject1').collect()])

# display the subject2 column as a list using a comprehension
print([data[0] for data in dataframe.select('subject2').collect()])
Output:
['vignan', 'vvit', 'vvit', 'vignan', 'vignan', 'iit']
['1', '2', '3', '4', '1', '5']
[67, 78, 100, 78, 89, 94]
[89, 89, 80, 80, 98, 98]
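Because collect() hands back a plain Python list of Row objects, and a Row supports positional indexing like a tuple, data[0] simply picks out the single selected field of each row. A minimal sketch of that indexing step, using plain tuples as stand-ins for Row objects so no Spark session is required (the sample values mirror the college output above):

```python
# Stand-in for dataframe.select('college').collect(): a list of
# one-field rows (tuples index positionally, just like Row objects)
collected = [("vignan",), ("vvit",), ("vvit",), ("vignan",), ("vignan",), ("iit",)]

# the same comprehension as in the syntax: take field 0 of every row
college_list = [data[0] for data in collected]
print(college_list)  # → ['vignan', 'vvit', 'vvit', 'vignan', 'vignan', 'iit']
```

Keep in mind that collect() pulls every row to the driver, so this pattern is only appropriate when the column comfortably fits in driver memory.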
Converting a PySpark DataFrame Column to a Python List
In this article, we will discuss how to convert a PySpark DataFrame column to a Python list.
Creating a DataFrame for demonstration:
Python3
# importing module
import pyspark

# importing SparkSession from the pyspark.sql module
from pyspark.sql import SparkSession

# creating a SparkSession and giving an app name
spark = SparkSession.builder.appName('sparkdf').getOrCreate()

# list of students data
data = [["1", "sravan", "vignan", 67, 89],
        ["2", "ojaswi", "vvit", 78, 89],
        ["3", "rohith", "vvit", 100, 80],
        ["4", "sridevi", "vignan", 78, 80],
        ["1", "sravan", "vignan", 89, 98],
        ["5", "gnanesh", "iit", 94, 98]]

# specify column names
columns = ['student ID', 'student NAME', 'college', 'subject1', 'subject2']

# creating a dataframe from the lists of data
dataframe = spark.createDataFrame(data, columns)

# display dataframe
dataframe.show()
Output:
+----------+------------+-------+--------+--------+
|student ID|student NAME|college|subject1|subject2|
+----------+------------+-------+--------+--------+
|         1|      sravan| vignan|      67|      89|
|         2|      ojaswi|   vvit|      78|      89|
|         3|      rohith|   vvit|     100|      80|
|         4|     sridevi| vignan|      78|      80|
|         1|      sravan| vignan|      89|      98|
|         5|     gnanesh|    iit|      94|      98|
+----------+------------+-------+--------+--------+