I have a solution for my first question, using the following function:
def results_to_df(results):
    columns = [
        col['Label']
        for col in results['ResultSet']['ResultSetMetadata']['ColumnInfo']
    ]

    listed_results = []
    for res in results['ResultSet']['Rows'][1:]:  # skip the header row
        values = []
        for field in res['Data']:
            try:
                values.append(list(field.values())[0])
            except IndexError:   # an empty dict means a NULL value
                values.append(' ')

        listed_results.append(
            dict(zip(columns, values))
        )

    return listed_results
and then:
t = results_to_df(response)
pd.DataFrame(t)
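For reference, here is a minimal sketch of how response can be obtained in the first place (assuming query_execution_id holds the ID of a query that has already completed):

import boto3
import pandas as pd

client = boto3.client('athena')

# fetch the first page of results for an already-completed query
response = client.get_query_results(QueryExecutionId=query_execution_id)

df = pd.DataFrame(results_to_df(response))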
As for my 2nd question, and in response to the request of @EricBellet, I'm also adding my approach for pagination, which I find inefficient and slower compared to loading the results from the Athena output in S3 (see the sketch after the code below):
def run_query(query, database, s3_output):
    '''
    Function for executing Athena queries and returning the query execution ID.
    '''
    client = boto3.client('athena')
    response = client.start_query_execution(
        QueryString=query,
        QueryExecutionContext={
            'Database': database
        },
        ResultConfiguration={
            'OutputLocation': s3_output,
        }
    )
    print('Execution ID: ' + response['QueryExecutionId'])
    return response


def format_result(results):
    '''
    This function formats the results so they can be appended in the needed format.
    '''
    columns = [
        col['Label']
        for col in results['ResultSet']['ResultSetMetadata']['ColumnInfo']
    ]

    formatted_results = []
    for result in results['ResultSet']['Rows'][0:]:
        values = []
        for field in result['Data']:
            try:
                values.append(list(field.values())[0])
            except IndexError:   # an empty dict means a NULL value
                values.append(' ')

        formatted_results.append(
            dict(zip(columns, values))
        )

    return formatted_results
import time

import boto3
import pandas as pd

client = boto3.client('athena')

res = run_query(query_2, database, s3_output)  # query Athena

marker = None
query_id = res['QueryExecutionId']
i = 0
start_time = time.time()
while True:
    paginator = client.get_paginator('get_query_results')
    response_iterator = paginator.paginate(
        QueryExecutionId=query_id,
        PaginationConfig={
            'MaxItems': 1000,
            'PageSize': 1000,
            'StartingToken': marker})

    for page in response_iterator:
        i = i + 1
        format_page = format_result(page)
        if i == 1:
            formatted_results = pd.DataFrame(format_page)
        else:
            formatted_results = pd.concat([formatted_results, pd.DataFrame(format_page)])

    try:
        marker = page['NextToken']   # resume from here on the next pass
    except KeyError:                 # no NextToken on the last page: we are done
        break

print("My program took", time.time() - start_time, "to run")
It's not formatted very nicely, but I think it does the job...
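For comparison, here is a minimal sketch of the S3 approach I mentioned above: wait for the query to finish, then read the CSV that Athena writes to the output location. The helper name results_from_s3 is mine, and reading s3:// paths with pandas assumes the s3fs package is installed:

import time

import boto3
import pandas as pd


def results_from_s3(query_execution_id, s3_output):
    '''Wait for an Athena query to finish, then load its CSV output from S3.'''
    client = boto3.client('athena')

    # poll until the query reaches a terminal state
    while True:
        state = client.get_query_execution(
            QueryExecutionId=query_execution_id
        )['QueryExecution']['Status']['State']
        if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
            break
        time.sleep(1)

    if state != 'SUCCEEDED':
        raise Exception('Query %s finished with state %s' % (query_execution_id, state))

    # Athena names the result file <QueryExecutionId>.csv under the output location
    return pd.read_csv(s3_output.rstrip('/') + '/' + query_execution_id + '.csv')


df = results_from_s3(res['QueryExecutionId'], s3_output)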
2021 Update
Today I'm using a custom wrapper around aws-data-wrangler as the best solution for the original question I asked several years ago.
import boto3
import awswrangler as wr
def run_athena_query(query, database, s3_output, boto3_session=None, categories=None, chunksize=None, ctas_approach=None, profile=None, workgroup='myTeamName', region_name='us-east-1', keep_files=False, max_cache_seconds=0):
    """
    An end-to-end Athena query method, based on the AWS Wrangler package.
    The method executes a query and returns a pandas DataFrame as output.
    You can read more at https://aws-data-wrangler.readthedocs.io/en/stable/stubs/awswrangler.athena.read_sql_query.html

    Args:
        - query: SQL query.
        - database (str): AWS Glue/Athena database name - it is only the database from which the query will be launched. You can still use and mix several databases by writing the full table name within the SQL (e.g. database.table).
        - ctas_approach (bool): Wraps the query with a CTAS and reads the resulting Parquet data from S3. If False, reads the regular CSV from S3.
        - categories (List[str], optional): List of column names that should be returned as pandas.Categorical. Recommended for memory-restricted environments.
        - chunksize (Union[int, bool], optional): If passed, splits the data into an iterable of DataFrames (memory friendly). If True, Wrangler iterates over the data by files in the most efficient way, without any guarantee of chunk size. If an integer is passed, Wrangler iterates over the data by a number of rows equal to the received integer.
        - s3_output (str, optional): Amazon S3 path.
        - workgroup (str, optional): Athena workgroup.
        - keep_files (bool): Should Wrangler delete or keep the staging files produced by Athena? Default is False.
        - profile (str, optional): AWS account profile. If boto3_session is provided, profile will be ignored.
        - boto3_session (boto3.Session(), optional): Boto3 session. The default boto3 session will be used if boto3_session receives None. If profile is provided, a session will automatically be created.
        - max_cache_seconds (int): Wrangler can look up in Athena's history whether this query has been run before. If so, and its completion time is less than max_cache_seconds before now, Wrangler skips query execution and just returns the same results as last time. If reading cached data fails for any reason, execution falls back to the usual query run path. Default is 0.

    Returns:
        - Pandas DataFrame
    """
    # Test for boto3 session and profile.
    if boto3_session is None and profile is not None:
        boto3_session = boto3.Session(profile_name=profile, region_name=region_name)

    print("Querying AWS Athena...")

    try:
        # Retrieve the data from Amazon Athena
        athena_results_df = wr.athena.read_sql_query(
            query,
            database=database,
            boto3_session=boto3_session,
            categories=categories,
            chunksize=chunksize,
            ctas_approach=ctas_approach,
            s3_output=s3_output,
            workgroup=workgroup,
            keep_files=keep_files,
            max_cache_seconds=max_cache_seconds
        )
        print("Query completed, data retrieved successfully!")
    except Exception as e:
        print(f"Something went wrong... the error is: {e}")
        raise

    return athena_results_df
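A minimal usage sketch (the query, database, S3 path and profile names below are placeholders):

df = run_athena_query(
    query='SELECT * FROM my_table LIMIT 10',     # placeholder query
    database='my_database',                      # placeholder Glue/Athena database
    s3_output='s3://my-bucket/athena-results/',  # placeholder staging location
    ctas_approach=True,
    profile='my-profile'                         # placeholder named AWS profile
)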
You can read more here.
DataFrame.from_dict(), DataFrame.from_records(), pandas.read_json(). There are others too, but again it is difficult to say with certainty which to use without knowing the structure of the data. Also, it may benefit you to review the documentation for get_query_results(). Maybe it takes parameter(s), meaning the default of 1000 rows can be increased. – Keary

response = client.get_query_results(QueryExecutionId=res['QueryExecutionId'], MaxResults=2000) and see if you get 2000 rows this time. Also, it might be reasonable to presume that there is an upper limit to the number of rows that can be returned via a single request (although I can't find any mention of it in the documentation). If there is an upper limit, all you would need to do is parse the JSON in the response for the 'NextToken' key and include it the next time you call client.get_query_results(), and you would effectively be getting the next 1000 (or whatever the limit is) rows. – Keary

get_query_results() returns a Python dictionary, so try d = response['ResultSet']['Rows'], then df = pd.DataFrame.from_dict(d). However, you might not get the expected DataFrame if d contains metadata (stuff that you don't want in the final DataFrame). If this is the case, you may need to extract from/mutate d (with a for loop or some other logic) so it contains what you want. This link may help: pandas.pydata.org/pandas-docs/stable/generated/… – Keary
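Following up on the comments above, this is roughly what the NextToken loop could look like (a sketch, assuming query_execution_id belongs to a query that has already succeeded):

import boto3
import pandas as pd

client = boto3.client('athena')

rows = []
kwargs = {'QueryExecutionId': query_execution_id, 'MaxResults': 1000}
while True:
    response = client.get_query_results(**kwargs)
    rows.extend(response['ResultSet']['Rows'])
    if 'NextToken' not in response:
        break
    kwargs['NextToken'] = response['NextToken']

columns = [c['VarCharValue'] for c in rows[0]['Data']]                 # first row is the header
data = [[f.get('VarCharValue') for f in r['Data']] for r in rows[1:]]  # NULLs come back as None
df = pd.DataFrame(data, columns=columns)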