How to make sure a web crawler works for a site hosted on AWS S3 that uses AJAX
Google's webmaster guide explains that the web server should handle requests for URLs containing _escaped_fragment_ (the crawler rewrites www.example.com/ajax.html#!mystate as www.example.com/ajax.html?_escaped_fragment_=mystate):

http://support.google.com/webmasters/bin/answer.py?hl=en&answer=174992
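For illustration, here is a minimal sketch (in Python; the function names are my own, not part of the guide) of the rewrite the crawler performs, which is exactly what a server is expected to reverse before returning an HTML snapshot:

```python
from urllib.parse import quote, unquote

# Rewrite a hash-bang ("#!") URL into the _escaped_fragment_ form that
# Googlebot requests under the AJAX crawling scheme.
def to_escaped_fragment(url: str) -> str:
    if "#!" not in url:
        return url
    base, fragment = url.split("#!", 1)
    separator = "&" if "?" in base else "?"
    return f"{base}{separator}_escaped_fragment_={quote(fragment)}"

# Recover the original hash-bang URL, i.e. what a server handling the
# request would need to do before rendering the corresponding state.
def from_escaped_fragment(url: str) -> str:
    base, _, query = url.partition("?_escaped_fragment_=")
    return f"{base}#!{unquote(query)}" if query else url

print(to_escaped_fragment("http://www.example.com/ajax.html#!mystate"))
# -> http://www.example.com/ajax.html?_escaped_fragment_=mystate
```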

My site is hosted on AWS S3, so I have no web server to handle such requests. How can I make sure the crawler gets fed the right content and my site gets indexed?

Refuse answered 9/10, 2012 at 12:38

S3-hosted sites are static HTML. No POST handling, no PHP rendering, nothing. So why do you care about Google indexing AJAX sites?

For a static website, simply upload well-formed robots.txt and sitemap.xml files to your root path.
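As a sketch of that suggestion (the bucket name and page URLs below are placeholders, not anything from the question), uploading the two files to the bucket root with boto3 could look like this:

```python
import boto3

BUCKET = "www.example.com"  # assumed: the S3 bucket serving the static site

# A minimal robots.txt that allows crawling and points at the sitemap.
robots_txt = """User-agent: *
Allow: /
Sitemap: http://www.example.com/sitemap.xml
"""

# A minimal sitemap listing the pages to be indexed.
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/index.html</loc></url>
  <url><loc>http://www.example.com/about.html</loc></url>
</urlset>
"""

s3 = boto3.client("s3")
s3.put_object(Bucket=BUCKET, Key="robots.txt",
              Body=robots_txt, ContentType="text/plain")
s3.put_object(Bucket=BUCKET, Key="sitemap.xml",
              Body=sitemap_xml, ContentType="application/xml")
```

After uploading, the sitemap can also be submitted in Google Webmaster Tools so the crawler picks it up sooner.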

Guan answered 23/5, 2014 at 12:37
