Create AWS Athena view programmatically

The question "Can you create views in Amazon Athena?" outlines how to create a view using the user interface.

I'd like to create an AWS Athena view programmatically, ideally using Terraform (which calls CloudFormation).

I followed the steps outlined at https://ujjwalbhardwaj.me/post/create-virtual-views-with-aws-glue-and-query-them-using-athena; however, I ran into an issue in that the view goes stale quickly.

...._view' is stale; it must be re-created.

The Terraform code looks like this:

resource "aws_glue_catalog_table" "adobe_session_view" {

  database_name = "${var.database_name}"
  name = "session_view"

  table_type = "VIRTUAL_VIEW"
  view_original_text = "/* Presto View: ${base64encode(data.template_file.query_file.rendered)} */"
  view_expanded_text = "/* Presto View */"

  parameters = {
    presto_view = "true"
    comment = "Presto View"
  }

  storage_descriptor {
    ser_de_info {
      name = "ParquetHiveSerDe"
      serialization_library = "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
    }

    columns { name = "first_column" type = "string" }
    columns { name = "second_column" type = "int" }
    ...
    columns { name = "nth_column" type = "string" }
  }
}

An alternative I'd be happy to use is the AWS CLI, however aws athena [option] provides no option for this.

I've tried:

  • create-named-query which I have not been able to get working for a statement such as CREATE OR REPLACE VIEW as this doesn't seem to be the intended use case for this command.
  • start-query-execution which asks for an output location, which suggests that this is meant for querying the data and outputting the results, as opposed to making stateful changes/creations. It also seems to be paired with stop-query-execution.
Mccabe answered 24/5, 2019 at 9:4 Comment(0)

As you suggested, it is definitely possible to create an Athena view programmatically via the AWS CLI using the start-query-execution command. As you pointed out, this does require you to provide an S3 location for the results, even though you won't need to check the file (Athena will put an empty txt file in the location for some reason).

Here is an example:

$ aws athena start-query-execution --query-string "create view my_view as select * from my_table" --result-configuration "OutputLocation=s3://my-bucket/tmp" --query-execution-context "Database=my_database"

{
    "QueryExecutionId": "1744ed2b-e111-4a91-80ea-bcb1eb1c9c25"
}

You can avoid having the client specify a bucket by creating a workgroup and setting the location there.

You can check whether your view creation was successful by using the get-query-execution command.

$ aws athena get-query-execution --query-execution-id 1744ed2b-e111-4a91-80ea-bcb1eb1c9c25
{
    "QueryExecution": {
        "QueryExecutionId": "1744ed2b-e111-4a91-80ea-bcb1eb1c9c25",
        "Query": "create view my_view as select * from my_table",
        "StatementType": "DDL",
        "ResultConfiguration": {
            "OutputLocation": "s3://my-bucket/tmp/1744ed2b-e111-4a91-80ea-bcb1eb1c9c25.txt"
        },
        "Status": {
            "State": "SUCCEEDED",
            "SubmissionDateTime": 1558744806.679,
            "CompletionDateTime": 1558744807.312
        },
        "Statistics": {
            "EngineExecutionTimeInMillis": 548,
            "DataScannedInBytes": 0
        },
        "WorkGroup": "primary"
    }
}
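
The same flow can be sketched with boto3's start_query_execution and get_query_execution, which mirror the CLI commands above. The names below are illustrative, and the client is passed in rather than created inline so the polling logic stands on its own:

```python
# Sketch only: in real use the client would be boto3.client("athena").
import time

def create_view(athena, database, view_sql, output_location):
    """Submit a CREATE VIEW statement and poll until Athena finishes it."""
    execution = athena.start_query_execution(
        QueryString=view_sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_location},
    )
    query_id = execution["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(1)  # DDL statements usually finish within a second or two
```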

Rosanarosane answered 25/5, 2019 at 0:54 Comment(3)
Thank you JD! This is perfect! Worked like a charm, great documentation :DMccabe
With the Query result location set on my work group I was able to replace --result-configuration with --work-group.Mccabe
Can this be done via Athena API?Thant

Creating views programmatically in Athena is not documented, and unsupported, but possible. What happens behind the scenes when you create a view using StartQueryExecution is that Athena lets Presto create the view and then extracts Presto's internal representation and puts it in the Glue catalog.

The staleness problem usually comes from the columns in the Presto metadata and the Glue metadata being out of sync. An Athena view really contains three descriptions of the view: the view SQL, the columns and their types in Glue format, and the columns and types in Presto format. If any of these three get out of sync you will get the "… is stale; it must be re-created." error.
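
As a rough illustration of that failure mode, here is a sketch (the helper name is hypothetical) that decodes the Presto metadata out of a Glue table dict, as returned by boto3's glue get_table, and compares the column names against the Glue columns:

```python
import base64
import json

def view_is_stale(glue_table):
    """glue_table: the "Table" dict from glue.get_table(...)."""
    prefix, suffix = "/* Presto View: ", " */"
    encoded = glue_table["ViewOriginalText"][len(prefix):-len(suffix)]
    presto = json.loads(base64.b64decode(encoded))
    glue_names = [c["Name"].lower() for c in glue_table["StorageDescriptor"]["Columns"]]
    presto_names = [c["name"].lower() for c in presto["columns"]]
    # The types are spelled differently in the two formats (string vs varchar),
    # so this sketch only compares names and order.
    return glue_names != presto_names
```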

These are the requirements on a Glue table to work as an Athena view:

  • TableType must be VIRTUAL_VIEW
  • Parameters must contain presto_view: true
  • TableInput.ViewOriginalText must contain an encoded Presto view (see below)
  • StorageDescriptor.SerdeInfo must be an empty map
  • StorageDescriptor.Columns must contain all the columns that the view defines, with their types
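
These requirements can be sketched as a Glue TableInput structure. The helper name is hypothetical; in real use the resulting dict would be passed to something like boto3's glue create_table:

```python
def athena_view_table_input(view_name, encoded_view_text, columns):
    """Build a Glue TableInput dict that Athena will accept as a view.

    columns: list of (name, glue_type) tuples matching the view's output.
    encoded_view_text: the "/* Presto View: ... */" string (see below).
    """
    return {
        "Name": view_name,
        "TableType": "VIRTUAL_VIEW",                # requirement 1
        "Parameters": {"presto_view": "true"},      # requirement 2
        "ViewOriginalText": encoded_view_text,      # requirement 3
        "ViewExpandedText": "/* Presto View */",
        "StorageDescriptor": {
            "SerdeInfo": {},                        # requirement 4: empty map
            "Columns": [                            # requirement 5
                {"Name": name, "Type": glue_type} for name, glue_type in columns
            ],
        },
    }
```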

The tricky part is the encoded Presto view. That structure is created by this code: https://github.com/prestosql/presto/blob/27a1b0e304be841055b461e2c00490dae4e30a4e/presto-hive/src/main/java/io/prestosql/plugin/hive/HiveUtil.java#L597-L600, and this is more or less what it does:

  • Adds a prefix /* Presto View: (with a space after :)
  • Adds a base 64 encoded JSON string that contains the view SQL, the columns and their types, and some catalog metadata (see below)
  • Adds a suffix */ (with a space before *)

The JSON that describes the view looks like this:

  • A catalog property that must have the value awsdatacatalog.
  • A schema property that must be the name of the database where the view is created (i.e. it must match the DatabaseName property of the surrounding Glue structure).
  • A list of columns, each with a name and type
  • An originalSql property with the actual view SQL (not including CREATE VIEW …, it should start with SELECT … or WITH …)

Here's an example:

{
  "catalog": "awsdatacatalog",
  "schema": "some_database",
  "columns": [
    {"name": "col1", "type": "varchar"},
    {"name": "col2", "type": "bigint"}
  ],
  "originalSql": "SELECT col1, col2 FROM some_other_table"
}

One caveat here is that the column types are almost, but not quite, spelled the same as in Glue. Where Athena/Glue has string, the value in this JSON must be varchar. Where Athena/Glue uses array<string>, the value in this JSON must be array(varchar), and struct<foo:int> becomes row(foo int).
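
Putting the encoding steps together, a minimal Python sketch might look like this. The helper name and the scalar type map are assumptions; complex types like array<…> and struct<…> would need the array(…)/row(…) rewriting described above and are not handled here:

```python
import base64
import json

# Only a few common scalar translations; extend as needed.
GLUE_TO_PRESTO = {"string": "varchar", "int": "integer"}

def encode_presto_view(database, columns, original_sql):
    """Build the ViewOriginalText value for a Glue VIRTUAL_VIEW table.

    columns: list of (name, glue_type) tuples.
    original_sql: the view body, starting with SELECT or WITH.
    """
    view = {
        "originalSql": original_sql,
        "catalog": "awsdatacatalog",
        "schema": database,
        "columns": [
            {"name": name, "type": GLUE_TO_PRESTO.get(t, t)} for name, t in columns
        ],
    }
    encoded = base64.b64encode(json.dumps(view).encode("utf-8")).decode("utf-8")
    return f"/* Presto View: {encoded} */"
```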

This is pretty messy, and putting it all together requires some fiddling and testing. The easiest way to get it working is to create a few views through Athena, then work the instructions above backwards by decoding those views to see how they look, and then try doing it yourself.

Grained answered 28/5, 2019 at 17:25 Comment(6)
To add to this: struct column data types in Athena need to be mapped to row in the Presto definition JSON e.g. type = "struct<col1:string>" in the Terraform/Glue definition maps to "type": "row(col1 varchar)" in the Presto view definition.Kaule
@NathanGriffiths right you are, I had written struct instead of row, now fixed in my answer.Grained
Just as a couple of additional notes from my learnings when implementing this answer which hopefully will help others. Columns in all 3 representations of the table must be in the same order (stale view otherwise). Columns must be cast in the originalSQL to match that denoted in the presto columns. (stale view). I also misread the answer and thought Presto would add the prefix and base64 encode my JSON for me, but that's not the case. originalText = addPrefixSuffix(base64(JSON.stringify(exampleObjectabove)))Adolphus
It's surprising that there's still not a better way of handling this. I want to version control the content of my view and then put it into a CF template, but that seems overly complex at this point. If there's some trick to doing this nicely in a CF template I'd be interested, but I haven't found anything yet.Sedgemoor
I have packaged this answer as a Terraform module that you can find here: github.com/iconara/terraform-aws-athena-viewGrained
Beautiful - I did this "in reverse" so I didn't have to construct a base64 encoded string: Created a view in athena, then used boto3 to update the "table". See my gist: gist.github.com/ecmonsen/e1e3e6906def081a5fbf43ad3a653171Lollop

Updating the above examples for Terraform 0.12+ syntax, and adding in reading the view queries from the filesystem:

resource "null_resource" "athena_views" {
  for_each = {
    for filename in fileset("${path.module}/athenaviews/", "**"):
           replace(filename,"/","_") => file("${path.module}/athenaviews/${filename}")
  }

  provisioner "local-exec" {
    command = <<EOF
    aws athena start-query-execution \
      --output json \
      --query-string 'CREATE OR REPLACE VIEW ${each.key} AS ${each.value}' \
      --query-execution-context "Database=${var.athena_database}" \
      --result-configuration "OutputLocation=s3://${aws_s3_bucket.my-bucket.bucket}"
EOF
  }

  provisioner "local-exec" {
    when    = "destroy"
    command = <<EOF
    aws athena start-query-execution \
      --output json \
      --query-string 'DROP VIEW IF EXISTS ${each.key}' \
      --query-execution-context "Database=${var.athena_database}" \
      --result-configuration "OutputLocation=s3://${aws_s3_bucket.my-bucket.bucket}"
EOF
  }
}

Note also the when = "destroy" provisioner block, which ensures the views are dropped when your stack is torn down.

Place text files containing a SELECT query below your module path under a directory (athenaviews/ in this example), and it will pick them up and create views. This will create views named subfolder_filename, and destroy them if the files are removed.

Shanley answered 14/10, 2019 at 4:12 Comment(6)
Thank you @Joshua Samuel Great addition! I believe we've added a good bit of documentation in this area.Mccabe
I like this approach, but : Error: Error running command 'aws athena start-query-execution --query-string "CREATE OR REPLACE VIEW Query1 AS SELECT ac, region FROM meta.getresources" --output json --query-execution-context "Database=meta_resources" --result-configuration "OutputLocation=s3://query-log" ': exit status 255. Output: usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters] aws help Unknown options: REPLACE, VIEW, Query1, AS, SELECT, ac,, region, FROM, meta.getresources", OR ... however, if I copy the SQL from CMD output, it runs in my SQL ClientAlburga
@Alburga you have to wrap --query-string parameter value in quotes, e.g. --query-string 'CREATE OR REPLACE VIEW...' but even better it is to make AWS CLI load source files, instead of loading them in Terraform: --query-string file://${each.value}Pedalfer
@MaciejMajewski Yes I had it wrapped, have done with double and single quotes, same error. Also loaded entire 'Create' statement from file. WHat version are you on ? Terraform v0.12.20Alburga
@Alburga I am on Terraform v0.12.21. Hard to say, with file:// it works well for usPedalfer
Working for me, but only the first time it is created. When I deleted the view from Athena and tried to recreate it, it did not work. Is there an issue with null_resource? Does it work only one time?Siliqua

Based on previous answers, here is an example that will execute queries only if the source file has changed. Also, instead of pasting the SQL query into the command, it uses the file:// adapter to pass it to the AWS CLI command.

resource "null_resource" "views" {
  for_each = {
    for filename in fileset("${var.sql_files_dir}/", "**/*.sql") :
    replace(replace(filename, "/", "_"), ".sql", "") => "${var.sql_files_dir}/${filename}"
  }

  triggers = {
    md5 = filemd5(each.value)

    # External references from destroy provisioners are not allowed -
    # they may only reference attributes of the related resource.
    database_name = var.database_name
    s3_bucket_query_output = var.s3_bucket_query_output
  }

  provisioner "local-exec" {
    command = <<EOF
      aws athena start-query-execution \
        --output json \
        --query-string file://${each.value} \
        --query-execution-context "Database=${var.database_name}" \
        --result-configuration "OutputLocation=s3://${var.s3_bucket_query_output}"
EOF
  }

  provisioner "local-exec" {
    when    = destroy
    command = <<EOF
      aws athena start-query-execution \
        --output json \
        --query-string 'DROP VIEW IF EXISTS ${each.key}' \
        --query-execution-context "Database=${self.triggers.database_name}" \
        --result-configuration "OutputLocation=s3://${self.triggers.s3_bucket_query_output}"
EOF
  }
}

To make destroy work correctly, name each file after the view it creates; for example, example.sql relates to the query:

CREATE OR REPLACE VIEW example AS ...
Pedalfer answered 24/2, 2020 at 14:46 Comment(0)

Addition to Theo's answer: In the base64 encoded JSON file, the type "string" is not valid when defining the column attributes! Always write "varchar" at this point.

edit: Also "int" must be declared as "integer"!

I went with the solution by Theo and it worked using AWS Cloud Formation Templates.

I just wanted to add a little hint, that can save you hours of debugging. I am not writing this as a comment, because I don't have rights to comment yet. Feel free to copy&paste this into the comment section of Theo's answer.

Fluxmeter answered 6/8, 2019 at 8:53 Comment(2)
No problem! Glad Theo's very details answer helped!Mccabe
I've fixed my answer so that it says varchar in the appropriate place.Grained

To add to the answers by JD D and Theo, working from their solutions, we figured out how to invoke the AWS CLI via Terraform as follows:

resource "null_resource" "athena_view" {

  provisioner "local-exec" {
    command = <<EOF
aws sts assume-role \
  --output json \
  --region my_region \
  --role-arn arn:aws:iam::${var.account_number}:role/my_role \
  --role-session-name create_my_view > /tmp/credentials.json

export AWS_SESSION_TOKEN=$(jq -r '.Credentials.SessionToken' /tmp/credentials.json)
export AWS_ACCESS_KEY_ID=$(jq -r '.Credentials.AccessKeyId' /tmp/credentials.json)
export AWS_SECRET_ACCESS_KEY=$(jq -r '.Credentials.SecretAccessKey' /tmp/credentials.json)

aws athena start-query-execution \
  --output json \
  --region my_region \
  --query-string "CREATE OR REPLACE VIEW my_view AS SELECT * FROM my_table" \
  --query-execution-context "Database=${var.database_name}" \
  --result-configuration "OutputLocation=s3://${aws_s3_bucket.my-bucket.bucket}"
EOF
  }
}

We use null_resource ... to run provisioners that aren't directly associated with a specific resource.

The result of aws sts assume-role is outputted as JSON into /tmp/credentials.json.

jq is used to parse the necessary fields out of the output of aws sts assume-role.

aws athena start-query-execution is then able to execute under the role specified by the environment variables defined.

Instead of --result-configuration "OutputLocation=s3://...., --work-group can be specified, NOTE that this is a separate flag on start-query-execution, not part of the --result-configuration string.
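
The jq-based credential export above can also be done in a few lines of Python. This sketch (hypothetical helper name) assumes the same JSON shape that aws sts assume-role writes to /tmp/credentials.json:

```python
import json
import os

def export_credentials(sts_response_json):
    """Set the AWS_* environment variables from an sts assume-role response."""
    creds = json.loads(sts_response_json)["Credentials"]
    os.environ["AWS_ACCESS_KEY_ID"] = creds["AccessKeyId"]
    os.environ["AWS_SECRET_ACCESS_KEY"] = creds["SecretAccessKey"]
    os.environ["AWS_SESSION_TOKEN"] = creds["SessionToken"]
```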

Mccabe answered 29/5, 2019 at 16:49 Comment(0)

Actually, the query for the view can be created using Athena named queries. For example:

resource "aws_athena_named_query" "my_named_query" {
  name      = "test-view"
  workgroup = "someworkgroupname"
  database  = "somedbname"
  query     = "CREATE OR REPLACE VIEW \"new_view\" AS SELECT field1, field2 FROM \"somedbname\".\"sometablename\""
}

One example of using the new terraform_data resource (terraform v1.4.x) could be:

resource "terraform_data" "create_athena_view" {
  triggers_replace = [
    aws_athena_named_query.my_named_query.id
  ]

  provisioner "local-exec" {
    command = "aws athena start-query-execution --query-string \"${aws_athena_named_query.my_named_query.query}\" --work-group ${aws_athena_named_query.my_named_query.workgroup} --query-execution-context Database=${aws_athena_named_query.my_named_query.database},Catalog=AwsDataCatalog"
  }
}

For older terraform versions, null_resource can be used.

Pitfall answered 25/4, 2023 at 15:4 Comment(0)

Another piece of code using AWS CDK Python, in the context of a merge-on-read lakehouse use case:

def build_mor_view(scope: Construct, db_name: str, table_name: str, col_definitions: list[list[str]]) -> CfnTable:
    """
    Builds a virtual view for a given table which merges on read the non-compacted data according to the
    generation id.
    For each column except timestamp and resourceId, the view retrieves the latest non-null available value by genId
    for each unique tuple of timestamp and resourceId.
    Files must be named roughly as follows: /%lake_location%/%table_name%/%earliestId%-%genId%.parquet
    CDK code is inspired by https://theglitchblog.com/2024/03/01/create-aws-athena-view-using-aws-cdk/

    :param scope: CDK Construct
    :param db_name: Database name
    :param table_name: Table name
    :param col_definitions: List of column definitions (name, type)
    :return: CfnTable view
    """
    columns = [{"name": col_def[0], "type": hive_type_of(col_def[1])} for col_def in col_definitions]

    sql = "SELECT \"timestamp\",\"resourceId\""
    for col_def in col_definitions:
        col_name = f"\"{col_def[0]}\""
        col_type = hive_type_of(col_def[1])

        if col_name not in ["\"timestamp\"", "\"resourceId\""]:
            sql += f",\nmax_by({col_name},-CAST(regexp_extract(\"$path\", '-(\d+)', 0) AS {col_type})) {col_name}"

    sql += f" FROM \"{db_name}\".\"{table_name}\" GROUP BY \"timestamp\", \"resourceId\""

    athena_json = {
        "originalSql": sql,
        "catalog": "awsdatacatalog",
        "schema": db_name,
        "columns": columns
    }

    # The Presto JSON uses varchar/real where Glue expects string/float, so map the types back for the Glue columns
    type_casted_col_list = [{"name": item['name'],
                             "type": "string" if item['type'] == "varchar" else "float" if item['type'] == "real" else
                             item['type']}
                            for item in athena_json['columns']]

    b64_en_view_config = (base64.b64encode((json.dumps(athena_json)).encode('utf-8'))).decode('utf-8')

    view_name = f"{table_name}_mor"
    # noinspection PyTypeChecker
    return CfnTable(scope, view_name,
                    catalog_id=Aws.ACCOUNT_ID,
                    database_name=db_name,
                    table_input=CfnTable.TableInputProperty(
                        name=view_name,
                        parameters={"presto_view": "true", "comment": "Presto View"},
                        table_type="VIRTUAL_VIEW",
                        storage_descriptor=CfnTable.StorageDescriptorProperty(
                            columns=type_casted_col_list,
                        ),
                        view_original_text=f"/* Presto View: {b64_en_view_config} */",
                        view_expanded_text="/* Presto View */"
                    )
                    )


def hive_type_of(col_type: str) -> str:
    if col_type in ["long", "LongType"]:
        return "bigint"
    elif col_type in ["float", "FloatType"]:
        return "float"
    elif col_type in ["double", "DoubleType"]:
        return "double"
    elif col_type in ["Instant", "TimestampType"]:
        return "timestamp"
    else:
        return col_type
Findley answered 25/7 at 17:7 Comment(0)