Tagged: Google

  • sakthisankara 12:38 pm on April 10, 2017 Permalink | Reply
    Tags: Google, Google Webmaster, Index your URL in Google, Search Engine Land, Search Engine Submission, Submit URL to Google   

    Now Google Allows You to Submit URL to Index its Search Results 

    Search engine giant Google now lets you submit URLs for indexing directly from its search results. Google has a habit of taking its most-used tools and integrating them into the core search experience, and along the same lines it has now added the ability to submit website URLs for indexing right on the search results page.

    First spotted by Search Engine Land, the Mountain View-based tech giant has enhanced the search experience for queries such as ‘submit URL to Google’ or something along the same lines, like ‘URL submission Google.’ These now display a dedicated URL submission box, complete with a reCAPTCHA check, right within the search results.

    Submit Your URL Directly

    There is currently no official word from Google, so we’re not sure how long the feature has been available; its appearance in the search results was only discovered recently. The process itself is the traditional one familiar to SEO (search engine optimisation) experts and newbies alike.

    You just have to copy and paste the link you want Google to index into the text box, prove your humanity by checking the ‘I’m not a robot’ box, and then press the Submit button. As you may already be aware, submitting a link doesn’t mean that Big G will surely index your page or show it in rankings. All submissions are reviewed before inclusion in the search engine’s index.

    This feature is steadily being rolled out globally, so it may or may not be available for you right this instant, but the URL submission form should make its way to your Google search results in the coming weeks. It further simplifies the process of submitting pages to the search engine for indexing, which can usually take from four days to four weeks. We contacted Google for more information and will update you once we hear back.

    As for those unaware of indexing: it is simply the process of adding web pages to Google’s massive data store used to serve search results. This is usually done automatically by web crawlers, programs that sift through your website’s code, read “meta” tags and index the content. WordPress pages and links are added to the index automatically.
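    For reference, the “meta” tags crawlers read look roughly like the sketch below; all values here are illustrative, not taken from any particular site:

```html
<head>
  <!-- Shown as the clickable headline in search results -->
  <title>Example Page Title</title>
  <!-- Often used as the snippet under the headline -->
  <meta name="description" content="A short summary of the page for search results.">
  <!-- Crawler directives: allow indexing, follow links on the page -->
  <meta name="robots" content="index, follow">
  <!-- Or keep a page out of the index entirely: -->
  <!-- <meta name="robots" content="noindex"> -->
</head>
```

    A crawler fetches the page, parses tags like these, and stores the result in the index alongside the page content.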


  • sakthisankara 4:04 am on January 30, 2017 Permalink | Reply
    Tags: Google, latest on page seo tips, On page SEO Checklist 2017, SEO Checklist for 2017   

    Latest On Page SEO Checklist for 2017 

    This updated and advanced on-page SEO checklist for 2017 will help boost your website’s presence, visibility, ranking and conversions on major search engines such as Google and Bing. On-page SEO is one of the major focus areas in search engine optimisation, and it covers the key concepts for optimising a website for the major search engines.


    On-Page SEO Checklist Points for 2017

    1 Use SSL – HTTPS instead of HTTP
    2 Title tags, meta description, meta keywords
    3 Check whether the site is stored in cache
    4 Keyword density & keywords in the body
    5 Check that all images are optimized
    6 Competitor site search & backlinks
    7 Check that keywords are used in the URLs of inner pages
    8 Check URL structure / URL rewriting
    9 Give internal links / anchor text to the keywords
    10 Page load speed under 2 seconds
    11 Check 301 redirection
    12 Validate HTML and CSS
    13 Check 302 redirection
    14 Check 404 handling
    15 Create a Google Analytics account
    16 Generate the Google Analytics tracking code
    17 Create a Google webmaster account
    18 Verify your website with Google & Bing webmaster tools
    19 Generate robots.txt & upload it to the site
    20 Submit robots.txt to webmaster tools
    21 Create & upload the XML & HTML sitemaps to the server / site
    22 For WordPress, use the Yoast or All in One SEO plugin
    23 Check browser compatibility across different browsers
    24 Proper use of H tags, including H1
    25 Create social networking pages & profiles
    26 Check W3C validity
    27 Avoid underscores in URLs
    28 Check mobile rendering
    29 Check that meta viewport tags are present
    30 Check whether the site is responsive
    31 Is your website user-friendly?
    32 Watch user reviews / blog comments
    33 Use limited reciprocal links
    34 Use videos whenever possible
    35 Build backlinks from high-quality .edu and .gov sites
    36 No duplicate content allowed
    37 Links from Google News articles
    38 Check the domain age of the website
    39 Check that permalinks / internal links work properly
    40 Check for user-generated / spammy content on the site
    41 Keep the content on the site updated
    42 Use a favicon
    43 Use Schema.org structured data
    44 og tags for LinkedIn
    45 og tags for Google+
    46 og tags for Twitter
    47 og tags for Facebook
    48 Google Maps account
    49 geo.position
    50 geo.placename
    51 geo.region
    52 Copyright
    53 Author
    54 Language
    55 RSS feeds
    56 Check IP canonicalization
    57 404 page presence
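    Several of the items above (title and meta tags, viewport, og tags, geo tags, favicon) live in the page’s head. A minimal sketch, with every value illustrative rather than prescriptive:

```html
<head>
  <title>Primary Keyword – Brand Name</title>
  <meta name="description" content="Concise page summary, ideally under ~160 characters.">
  <!-- Mobile rendering / responsiveness -->
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- Open Graph tags, read by Facebook, LinkedIn and others -->
  <meta property="og:title" content="Primary Keyword – Brand Name">
  <meta property="og:type" content="website">
  <meta property="og:url" content="https://example.com/">
  <!-- Twitter reads its own card tags in addition to Open Graph -->
  <meta name="twitter:card" content="summary">
  <!-- Geo tags (checklist items 49–51) -->
  <meta name="geo.position" content="12.9716;77.5946">
  <meta name="geo.placename" content="Bengaluru">
  <meta name="geo.region" content="IN-KA">
  <link rel="icon" href="/favicon.ico">
</head>
```

    Structured data (item 43) is usually added separately as a JSON-LD script block following the Schema.org vocabulary.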

    Use this latest 2017 on-page SEO checklist to make your business website visible and get it ranked well as soon as possible.

    Source Link

  • sakthisankara 1:25 am on November 26, 2016 Permalink | Reply
    Tags: Crawler, Google, Google bots, Google Crawler, Robots.txt, Spider   

    Fun with robots.txt 

    One of the most boring topics in technical SEO is robots.txt. Rarely is there an interesting problem needing to be solved in the file, and most errors come from not understanding the directives or from typos. The general purpose of a robots.txt file is simply to suggest to crawlers where they can and cannot go.


    Basic parts of the robots.txt file

    • User-agent — specifies which robot the rules that follow apply to.
    • Disallow — suggests that robots not crawl this area.
    • Allow — allows robots to crawl this area.
    • Crawl-delay — tells robots to wait a certain number of seconds before continuing the crawl.
    • Sitemap — specifies the location of the sitemap.
    • Noindex — tells Google to remove pages from the index.
    • # — comments out a line so it will not be read.
    • * — matches any text.
    • $ — marks where the URL must end.
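    Putting those directives together, a hypothetical robots.txt might look like this (all paths and bot names are illustrative):

```
User-agent: *                 # rules for all robots
Disallow: /admin/             # suggest robots skip this area
Allow: /admin/public/         # except this subfolder
Disallow: /*.pdf$             # * matches any text, $ anchors the end of the URL
Crawl-delay: 10               # wait 10 seconds between fetches (ignored by Google)

User-agent: BadBot            # rules for one specific robot
Disallow: /                   # suggest it crawl nothing

Sitemap: https://example.com/sitemap.xml
```

    Each User-agent group carries its own rules; a crawler picks the most specific group that matches its name, falling back to the * group.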

    Other things you should know about robots.txt

    • Robots.txt must be in the main folder, i.e., domain.com/robots.txt.
    • Each subdomain needs its own robots.txt — http://www.domain.com/robots.txt is not the same as domain.com/robots.txt.
    • Crawlers can ignore robots.txt.
    • URLs and the robots.txt file are case-sensitive.
    • Disallow simply suggests crawlers not go to a location. Many people use this to try to de-index pages, but it won’t work. If someone links to a page externally, it will still be shown in the SERPs.
    • Crawl-delay is not honored by Google, but you can manage crawl settings in Google Search Console.
    • Allow CSS and JS, according to Google’s Gary Illyes:

    User-Agent: Googlebot
    Allow: .js
    Allow: .css

    • Validate your robots.txt file in Google Search Console and Bing Webmaster Tools.
    • Noindex will work, according to Eric Enge of Stone Temple Consulting, but Google Webmaster Trends Analyst John Mueller recommends against using it. It’s better to noindex via meta robots or x-robots.
    • Don’t block crawling to avoid duplicate content. Read more about how Google consolidates signals around duplicate content.
    • Don’t disallow pages which are redirected. The spiders won’t be able to follow the redirect.
    • Disallowing pages prevents previous versions from being shown in archive.org.
    • You can search archive.org for older versions of robots.txt — just type in the URL, i.e., domain.com/robots.txt.
    • The max size for a robots.txt file is 500 KB.
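    If you want to sanity-check rules without a live validator, Python’s built-in urllib.robotparser can evaluate a robots.txt locally. A small sketch with made-up rules (note that this parser applies rules in file order, unlike Google’s longest-match behaviour, so the Allow line comes first):

```python
# Evaluate hypothetical robots.txt rules with the standard library parser.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Allow: /private/press/
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# The Allow line matches first, so the press subfolder is crawlable.
print(parser.can_fetch("*", "https://example.com/private/press/"))
# Everything else under /private/ is disallowed.
print(parser.can_fetch("*", "https://example.com/private/secret.html"))
# Paths with no matching rule default to allowed.
print(parser.can_fetch("*", "https://example.com/blog/"))
```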
    Now for the fun stuff!

      Many companies have done creative things with their robots.txt files. Take a look at the following examples!

      ASCII art and job openings

      Nike.com has a nice take on its slogan, “just crawl it,” inside its robots.txt, and it also includes its logo.

      Nike robots.txt with ASCII art

      Seer also uses art and has a recruitment message.

      Seer robots.txt with ASCII art and job posting

      TripAdvisor has a recruitment message right in the robots.txt file.

      TripAdvisor job posting inside robots.txt

      Fun robots

      Yelp likes to remind the robots that Asimov’s Three Laws are in effect.

      Yelp Asimov's Three Laws in robots.txt

      As does last.fm.

      last.fm Asimov's Three Laws in robots.txt

      According to YouTube, we already lost the war to the robots.

      YouTube war on robots in robots.txt

      Page One Power has a nice “Star Wars” reference in its robots.txt.

      Page One Power Star Wars in robots.txt

      Google wants to make sure Larry Page and Sergey Brin are safe from Terminators in its killer-robots.txt file.

      Google Terminator reference in killer-robots.txt

      Who can ignore the front page of the internet? Reddit references Bender from “Futurama” and Gort from “The Day the Earth Stood Still.”

      Reddit Bender and Gort references in robots.txt


      Humans.txt describes itself as “an initiative for knowing the people behind a website. It’s a TXT file that contains information about the different people who have contributed to building the website.” When I tried a few domains, I was surprised by how often I found one. Check out https://www.google.com/humans.txt.

      Just using robots.txt to mess with people at this point

      One of my favorite examples is from Oliver Mason, who disallows everything and bids his blog farewell, only to then allow every individual file again farther down in the file. As he comments at the bottom, he knows this is a bad idea. (Don’t just read the robots.txt here, seriously, go read this guy’s whole website.)

      On my personal website, I have a robots.txt file to mess with people as well. The file validates fine, even though at first glance it would look like I’m blocking all crawlers.

      StoxSEO.com robots.txt BOM

      The reason is that I saved the file with a BOM (byte order mark) character at the beginning, which makes my first line invalid — as you can see when I go to verify it in Google Search Console. With the first line invalid, the Disallow has no User-agent reference, so it is also invalid.

      StoxSEO Google Search Console BOM
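      The BOM trick described above is easy to reproduce. A sketch in Python (the file path is illustrative): saving with the "utf-8-sig" encoding prepends the three BOM bytes, so the first line no longer begins with the literal text "User-agent" and strict parsers reject it.

```python
# Demonstrate how a BOM hides the start of a robots.txt's first line.
import os
import tempfile

content = "User-agent: *\nDisallow: /\n"
path = os.path.join(tempfile.gettempdir(), "robots.txt")

# "utf-8-sig" writes a UTF-8 byte order mark before the content.
with open(path, "w", encoding="utf-8-sig") as f:
    f.write(content)

with open(path, "rb") as f:
    raw = f.read()

print(raw[:3])                        # the UTF-8 BOM bytes: b'\xef\xbb\xbf'
print(raw.startswith(b"User-agent"))  # False: the directive is no longer first
```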

      Indexed pages that shouldn’t exist

      If you search for “World’s Greatest SEO,” you’ll find a page on Matt Cutts’ website that doesn’t actually exist. SEO Mofo chose a directory (/files) that is blocked by https://www.mattcutts.com/robots.txt. The only information Google has about this page is from the links that were built to the non-existent page. While the page 404s, Google still shows it in the search results with the anchor text from the links.

      World's Greatest SEO SERPs

      A whole freaking website inside robots.txt

      Thought up by Alec Bertram, this amazing feat is chronicled where else but his robots.txt file. He has the how, the source and even a menu to guide you.

      This was also used on vinna.cc to embed an entire game into the file. Head over to https://vinna.cc/robots.txt and play Robots Robots Revolution!


  • sakthisankara 12:17 am on September 16, 2016 Permalink | Reply
    Tags: Google, Rank brain   

    What is Google Rankbrain algorithm? How does it work? 

    All SEO webmasters are aware that Google uses complex algorithms for ranking websites: Panda, Penguin, Pigeon, Hummingbird and so on. These algorithms determine how a web page ranks on the results page. In October 2015, Google introduced a new, intelligent algorithm that works quite differently from the others: Google Rankbrain, a machine learning technology. Machine learning means a computer teaches itself how to do something. Before this algorithm, Google engineers used hand-crafted mathematical processes for determining results, but that is the past; now Rankbrain, an Artificial Intelligence (AI) system, does the big job.

    According to Google, Rankbrain combines strong signals such as keywords, page speed, mobile-friendliness, backlinks and anchor text to determine the results that appear in the SERP. Users now often search with multiple words or long-tail keywords, and Rankbrain has the ability to perceive those words and how they relate to each other. It looks for similar meanings and outputs a suitable result.

    rank brain.jpg

    How it affects SEO:

    Always keep in mind that what a user wants from a search result is also what Google wants. SEO is gradually getting tougher. These are not the old days when link building alone was the best way to rank high in a search engine; after some big algorithm updates by Google, those days are gone.

    Keywords are still important for ranking, but natural content full of information that quenches the thirst of users is the most important factor right now. We don’t know which keywords any given user will use for a particular result. Users like natural language and a proper result for what they search, so we have to write content with this in mind. User satisfaction is the most important factor.

  • sakthisankara 4:40 am on April 3, 2016 Permalink | Reply
    Tags: Content marketing, Google, Latest SEO Tips 2016, Mobile SEO, seo techniques, Technical Seo   

    Latest seo tips 2016 

    SEO is a dynamic industry, and 2015 was no exception. That said, a couple of things you should focus on in 2016 are:

    1. Mobile SEO

    Mobile has been a prime focus for Google, as we have seen in the recent mobile-friendly update and app indexing. According to the latest annual mobility report published by Ericsson (gathered from around 100 carriers globally), there will be 6.1 billion smartphone users globally by the end of 2020, and 80 percent of mobile data traffic will come from smartphones.

    2. Technical SEO

    Google updated its technical guidelines about a year ago to ask that you allow Googlebot to crawl your site’s CSS and JavaScript files. The objective is to ensure Googlebot crawls and renders websites correctly and in the most effective manner possible.


    3. Google’s real time Penguin update

    Google’s next Penguin update is supposed to run in real time moving forward, so ensure you are not doing any link-building activities that violate Google’s guidelines.

    4. High quality content is still king

    Did you read Google’s recent 160-page content assessment guide? It clearly shows how Google assesses the quality of content from a user-experience standpoint, so make sure the content you write meets that quality bar.

    5. Fast-loading pages using the AMP format may get ranking boost

    This is relatively new, but you should definitely look into it. Google recently said it will begin sending search traffic to AMP pages in late February, so that’s one major change you can expect in 2016. Wondering why Google is focusing on AMP? Because AMP pages load four times faster and use eight times less data than traditional mobile-optimized pages. For more about AMP, see the AMP Project site.
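    For orientation, an AMP page follows roughly the shape sketched below. The URLs and content are illustrative, and the required inline amp-boilerplate style block is omitted here for brevity; see the AMP Project site for the full required markup.

```html
<!doctype html>
<html amp>
<head>
  <meta charset="utf-8">
  <!-- The AMP runtime, loaded asynchronously -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <!-- Points back to the regular (non-AMP) version of the page -->
  <link rel="canonical" href="https://example.com/article.html">
  <meta name="viewport" content="width=device-width,minimum-scale=1">
  <!-- AMP also requires an inline amp-boilerplate style block here -->
</head>
<body>
  <h1>Hello, AMP</h1>
</body>
</html>
```

    The regular page then links to the AMP version with a rel="amphtml" link so search engines can discover it.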

    Source:- https://www.quora.com/What-are-your-favourite-latest-SEO-tips-for-2016
