pyspark show dataframe as table with horizontal scroll in ipython notebook
Z

12

45

A pyspark.sql.DataFrame displays messily with DataFrame.show() - lines wrap instead of scrolling horizontally.

[screenshot: wrapped output of DataFrame.show()]

but it displays fine with pandas.DataFrame.head(): [screenshot: pandas table rendered with a horizontal scrollbar]

I tried these options:

import IPython
IPython.auto_scroll_threshold = 9999

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from IPython.display import display

but no luck. The scroll does work, though, when the notebook is used within the Atom editor with the Jupyter plugin:

[screenshot: scrollable show() output in Atom with the Jupyter plugin]

Zohara answered 15/4, 2017 at 14:17 Comment(2)
Did you make any progress here?Remodel
I think this is what I did: limit the Spark DataFrame to a few rows, then call .toPandas() on that "head" DataFrame: spark_df_head.toPandas()Zohara
Z
34

This is a workaround:

spark_df.limit(5).toPandas().head()

Although I do not know the computational burden of this query, I am thinking limit() is not expensive. Corrections welcome.
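
Note that limit() alone gives no guarantee about which rows come back, as a comment below points out. A minimal sketch of a more deterministic preview, assuming an example column named "id" to sort on:

# Hedged sketch: sort first so limit() returns a predictable set of rows;
# "id" is an assumed example column - substitute a real one.
preview_pdf = (
    spark_df
    .orderBy("id")
    .limit(5)
    .toPandas()
)
preview_pdf  # pandas' HTML repr gets a horizontal scrollbar in Jupyter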

Zohara answered 28/9, 2018 at 17:42 Comment(7)
agreed that this is not a great solution - but I have not seen a "native" (non-pandas) alternativeLutanist
Note that limit() does not keep the order of the dataframe.Czarevna
@LouisYang If the DataFrame is sorted, yes it does.Coggins
This solution is good only for small tables; other than that, it slows down the whole process.Percolation
Performance does not seem to be an issue, based on this single test. Timing with the %%timeit magic: spark_df.limit(5).toPandas().head() - 9.6s; spark_df.show(5) - 10.1s. Running on JupyterLab 1.0.1, connected to a remote Databricks PySpark via DBConnect. spark_df is a medium-sized dataframe.Thereon
What if you don't have Pandas installed and it isn't an option?Meritocracy
display(spark_df) will show the records in tabular format. To see all the columns/rows, click on any one of the records; a scroll bar will appear.Renae
T
39

Just add (and execute):

from IPython.core.display import HTML
display(HTML("<style>pre { white-space: pre !important; }</style>"))

And you'll get the df.show() output with the scrollbar. [screenshot: df.show() output with a horizontal scrollbar]

Tiertza answered 23/9, 2021 at 9:57 Comment(3)
super nice, thanks!Sev
This works great. Unfortunately it also has the drawback of adding horizontal scrolling to Markdown cells.Counterfoil
I wonder why this is not a default for Spark/Jupyter. Are there any possible drawbacks?Underpart
N
16

If anyone is still facing the issue, it can be resolved by tweaking the page's styles using the browser's developer tools.

When you see wrapped output like this: [screenshot: wrapped DataFrame output]

Open the browser developer tools (F12), then inspect element (Windows: Ctrl+Shift+C, Mac: Cmd+Option+C). After this, click (select) the DataFrame output (shown in the picture above) and uncheck the white-space attribute (see the snapshot below). [screenshot: dev tools styles pane with the white-space rule unchecked]

You only need to do this once (unless you refresh the page).

This will show you the exact data natively as is. No need to convert to pandas.
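
If you would rather not repeat the manual tweak every session, the same white-space override can be scripted from a cell - essentially what the CSS/%%html answers elsewhere on this page do. A minimal sketch (the selector assumes a JupyterLab-style DOM; the classic notebook uses div.output_area instead):

from IPython.display import HTML, display

# Hedged: same effect as unchecking white-space in dev tools, applied from code.
display(HTML("<style>div.jp-OutputArea-output pre { white-space: pre !important; }</style>"))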

Nidus answered 19/11, 2019 at 18:30 Comment(1)
This was perfect for a quick and dirty demo, thank you!! Yes, it breaks if you reload, but it's perfect for a screencastMonanthous
R
10

Just edit the css file and you are good to go.

  1. Open the jupyter notebook ../site-packages/notebook/static/style/style.min.css file.

  2. Search for white-space: pre-wrap;, and remove it.

  3. Save the file and restart jupyter-notebook.

Problem fixed. :)
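
If you are unsure where that file lives in your environment, a small hedged helper like this one prints the path (it assumes the classic notebook package layout; Notebook 7 / JupyterLab installs are laid out differently):

import os
import notebook

# Hedged helper: locate style.min.css for the active environment.
print(os.path.join(os.path.dirname(notebook.__file__), "static", "style", "style.min.css"))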

Rebato answered 19/2, 2020 at 8:34 Comment(0)
J
2

Try display(dataframe_name); it renders a scrollable table.

Junko answered 17/12, 2020 at 10:17 Comment(2)
this did not work in Jupyter notebook for me. It works in Databricks notebooks, but the question is for Jupyter notebooks.Zohara
Your answer has solved a very pertinent problem for me. I had been trying to download sample of data after performing some operations in Databricks and none of the answers on the internet seemed to work for me. Your answer creates a table whose sample of 100 records I can download. Thanks a ton.Selma
R
2

Try running this in its own cell:

%%html
<style>
div.jp-OutputArea-output pre {
    white-space: pre;
}
</style>

This is based on the solution posted by @MateoB27 (their code did not work for me though it was close).

Richia answered 25/1 at 19:38 Comment(0)
V
1

Adding to the answers given above by @karan-singla and @vijay-jangir, the white-space: pre-wrap styling can be commented out with a handy one-liner:

$ awk -i inplace '/pre-wrap/ {$0="/*"$0"*/"}1' $(dirname `python -c "import notebook as nb;print(nb.__file__)"`)/static/style/style.min.css

This translates as: use awk to update, in place, lines that contain pre-wrap so that they are surrounded by /* ... */ (i.e. commented out), in the style.min.css file found in your working Python environment.

This, in theory, can then be used as an alias if one uses multiple environments, say with Anaconda.


Visakhapatnam answered 18/5, 2020 at 10:38 Comment(0)
P
1

This solution does not depend on pandas, does not change the Jupyter settings, and looks good (the scrollbar activates automatically).

import pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("My App").getOrCreate()
spark.conf.set("spark.sql.repl.eagerEval.enabled", True)

data = [
  [1, 1, 'A'],
  [2, 2, 'A'],
  [3, 3, 'A'],
  [4, 3, 'B'],
  [5, 4, 'B'],
  [6, 5, 'C'],
  [7, 6, 'C']]
df = spark.sparkContext.parallelize(data).toDF(('column_1', 'column_2', 'column_3'))

# This will print a pretty table
df
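
Related knobs exist for how much the eager HTML representation renders; a hedged sketch is below - the names are the documented spark.sql.repl.eagerEval.* settings, but check them against your Spark version:

# Optional tuning of the eager HTML repr (the values here are just examples).
spark.conf.set("spark.sql.repl.eagerEval.maxNumRows", 20)  # rows rendered
spark.conf.set("spark.sql.repl.eagerEval.truncate", 20)    # characters shown per cell
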
Posh answered 5/11, 2021 at 14:18 Comment(0)
C
1

What worked for me: since I'm using an environment where I don't have access to the CSS files and wanted to do it in a cell, Jupyter magic commands gave a neat solution.

Found the solution at https://mcmap.net/q/374170/-how-to-disable-line-wrapping-in-jupyter-notebook-output-cells

Just paste this in a cell:

%%html
<style>
div.output_area pre {
    white-space: pre;
}
</style>

This also works in Scala notebooks.

Caucasus answered 8/6, 2022 at 21:10 Comment(0)
L
0

To be precise about what someone said before: in the file anaconda3/lib/python3.7/site-packages/notebook/static/style/style.min.css there are two occurrences of white-space: nowrap;. You have to comment out the one under samp, like this: samp { /*white-space: nowrap;*/ }, then save the file and restart Jupyter.

Loydloydie answered 5/1, 2021 at 16:9 Comment(0)
L
0

I would create a small function to convert the PySpark DataFrame to a pandas DataFrame, take the head, and call it like this.

Function

def display_df(df):
    return df.limit(5).toPandas().head()

Then call

display_df(spark_df)

You do need to have pandas imported:

import pandas as pd
Level answered 14/11, 2022 at 13:21 Comment(0)
S
-1

I created the little function below and it works fine:

def printDf(sprkDF): 
    newdf = sprkDF.toPandas()
    from IPython.display import display, HTML
    return HTML(newdf.to_html())

You can use it directly on your Spark queries if you like, or on any Spark DataFrame:

printDf(spark.sql('''
select * from employee
'''))
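
As the comment below notes, converting a whole Spark DataFrame to pandas can be costly. A hedged variant that caps the rows on the Spark side before collecting them to the driver (printDfHead and n are hypothetical names used here for illustration):

def printDfHead(sprkDF, n=10):
    # Hedged variant: limit rows before the pandas conversion.
    from IPython.display import HTML
    return HTML(sprkDF.limit(n).toPandas().to_html())
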
Stockbroker answered 20/6, 2017 at 17:21 Comment(1)
but pyspark.sql.DataFrame().toPandas().head() works just fine without needing your HTML conversion (see question) ... and one wouldn't want to convert a big dataframe to pandas ... the workaround is to convert only the head to pandasZohara
