How to convert epoch to datetime redshift?
I work in DBeaver. I have a table x.

Table x has a column "timestamp" containing values like:

1464800406459 
1464800400452 
1464800414056 
1464800422854 
1464800411797

The result I want:

Wed, 01 Jun 2016 17:00:06.459 GMT
Wed, 01 Jun 2016 17:00:00.452 GMT
Wed, 01 Jun 2016 17:00:14.056 GMT
Wed, 01 Jun 2016 17:00:22.854 GMT 
Wed, 01 Jun 2016 17:00:11.797 GMT 

I tried the Redshift query

SELECT FROM_UNIXTIME(x.timestamp) as x_date_time 
FROM x

but it didn't work.

Error occurred:

Invalid operation: function from_unixtime(character varying) does not exist

I also tried

SELECT DATE_FORMAT(x.timestamp, '%d/%m/%Y') as x_date 
FROM x

Error occurred:

Invalid operation: function date_format(character varying, "unknown") does not exist

Is there anything wrong with the syntax? Or is there another way to convert this to a human-readable date and time?

Mesitylene answered 2/10, 2016 at 8:59 Comment(0)
109

Redshift doesn't have a from_unixtime() function. You'll need to use the SQL below to get the timestamp; it just adds the number of seconds to the epoch and returns the result as a timestamp.

select timestamp 'epoch' + your_timestamp_column * interval '1 second' AS your_column_alias
from your_table
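
If the column is stored as a 13-digit varchar holding milliseconds, as in the question, a variant along these lines should work (a sketch: cast to BIGINT and divide by 1000 to get whole seconds; see also the comments below):

-- "timestamp" here is assumed to be a varchar holding epoch milliseconds
select timestamp 'epoch' + CAST(x."timestamp" AS BIGINT) / 1000 * interval '1 second' AS x_date_time
from x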
Levulose answered 2/10, 2016 at 15:6 Comment(7)
I just realized that your_timestamp_column is a string with 13 digits, so I adapted your syntax as: select timestamp 'epoch' + CAST(your_timestamp_column AS BIGINT)/1000 * interval '1 second' AS your_column_alias from your_table. Thank you.Mesitylene
How do I add the time zone? There is no GMT after the time.Mesitylene
I'm not sure if we can get the GMT after the time, as this is only a timestamp field. You can try timestamp with time zone instead of timestamp.Levulose
@KhusnaNadia's answer is valid because the OP's string is 13 digits. Ideally you want your string to be 10 digits, so adjust the /1000 in their answer according to how many digits beyond 10 your string has (e.g. if your string is 10 digits, the /1000 is not needed; if your string is 15 digits, you would need /100000, etc.).Ladylike
This solution drops ms data. Is there a way to keep this information? For example, even using /1000.0, the result is truncated to seconds.Differential
@Differential treat your Unix timestamp column as a float: select timestamp 'epoch' + your_timestamp_column::float * interval '1 second' …Chkalov
I did that by / 1000.0, effectively.Differential
38

A UDF is going to be pretty slow. I checked the execution time for 3 solutions over 1k rows.

The slowest -

-- using UDF from one of the answers
SELECT from_unixtime(column_with_time_in_ms/ 1000)
FROM table_name LIMIT 1000;

Execution time: 00:00:02.348062s

2nd best -

SELECT date_add('ms',column_with_time_in_ms,'1970-01-01')
FROM table_name LIMIT 1000;

Execution time: 00:00:01.112831s

And the fastest -

SELECT TIMESTAMP 'epoch' + column_with_time_in_ms/1000 *INTERVAL '1 second'
FROM table_name LIMIT 1000;

Execution time: 00:00:00.095102s


Execution time calculated from stl_query -

SELECT *
       ,endtime - starttime
FROM stl_query
WHERE querytxt ilike('%table_name%limit%')
ORDER BY starttime DESC;
Announcement answered 19/7, 2017 at 18:7 Comment(3)
Redshift doesn't have from_unixtime or date_add. The only suggestion on here that actually works on Redshift is a duplicate of one of the answers above (the fastest one). Also, you don't need to divide by 1000 if the time is in ms. Thanks for sharing how you did the execution times, though.Paderna
@Paderna - You will have to define the from_unixtime() UDF from the answer by @aloschilov, as mentioned in the SQL comment in my code snippet. Since the column I was working with stores the epoch in milliseconds, I DO need to divide by 1000 to get the epoch in seconds. If your column stores it in seconds, then you need not divide.Announcement
Saved my day! Thanks!Insistence
19

The simplest solution is to create a from_unixtime() function:

CREATE OR REPLACE FUNCTION from_unixtime(epoch BIGINT)
  RETURNS TIMESTAMP AS
'import datetime

# epoch is expected in seconds since the Unix epoch
return datetime.datetime.fromtimestamp(epoch)
'
LANGUAGE plpythonu IMMUTABLE;

See the Redshift documentation on UDFs for details.
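
A usage sketch against the question's table (assuming the varchar "timestamp" column holds epoch milliseconds, so it is cast and divided by 1000 to get seconds):

SELECT from_unixtime(CAST(x."timestamp" AS BIGINT) / 1000) AS x_date_time
FROM x;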

Coenurus answered 30/1, 2017 at 21:32 Comment(3)
This is fantastic.Townscape
Nice, but user-defined functions cannot be used in materialized views.Mise
@Mise Materialized views were introduced on Nov 29, 2019. At the time of this answer they did not exist. I bet support could be added in the future.Coenurus
1

For quick reference, here is the SQL UDF implementation of the from_unixtime function shown above in Python. I've not tested the performance but I imagine it would be similar to the plain SQL version. It's a whole lot easier to write though.

Note: the input is expected in milliseconds; the division by 1000 converts it to whole seconds since the epoch.

CREATE FUNCTION from_unixtime (BIGINT)
  RETURNS TIMESTAMP WITHOUT TIME ZONE
IMMUTABLE
as $$
  SELECT TIMESTAMP 'epoch' + $1 / 1000 * interval '1 second'
$$ LANGUAGE sql;
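
A quick sanity check using the first sample value from the question (the integer division drops the millisecond part):

SELECT from_unixtime(1464800406459);
-- expected: 2016-06-01 17:00:06 (per the sample data in the question)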
Henotheism answered 4/7, 2020 at 23:20 Comment(0)
0

I used it like this. LEFT(column_name, 10) keeps the first 10 digits of the 13-digit millisecond epoch, i.e. the whole seconds, which DATEADD then adds to '1970-01-01':

CAST(DATEADD(S, CONVERT(int, LEFT(column_name, 10)), '1970-01-01') AS timestamp) AS column_name

SELECT
     task_id
    ,CAST(DATEADD(S, CONVERT(int, LEFT(SLA, 10)), '1970-01-01') AS timestamp) AS SLA
FROM my_schema.my_task_table;
Whist answered 1/9, 2020 at 0:11 Comment(0)
0
SELECT TIMESTAMP 'epoch' + cast(column_name as bigint) * INTERVAL '1 second'
  AS column_name
Discomfit answered 12/7, 2023 at 17:18 Comment(1)
Add some context to your answer and explain your thought process to the requester. It's always a good idea to provide an explanation with your answers.Sextuplicate
