Today we have been writing again: another 2,000-word Page that we'll publish tomorrow (it is about the Kanban Methodology).

Before talking about the famous robots.txt file we must mention this:

  • Writing 2,000+ word Posts and Pages (our Posts are 500 words on average, but our Pages are 4 times longer) is more demanding than it seems at first glance (especially if English is not your mother tongue).


In case you don't know, in this section we talk about how we are building this Website: our SEO strategy, the Theme and Plugins we use, our real results (the Pageviews and Users we get)…


That said, we'll divide this Post into 2 parts:

1. What the robots.txt file is and how you can create one.

2. Today’s Google Analytics report.


Let’s begin:

1. What the robots.txt file is and how you can create one.

As soon as you think about publishing your Website, you find out that there is a file called "robots.txt" that tells Google whether or not it should Crawl certain pages of your site.


What is this robots.txt file?

  • It is a simple .txt file containing simple instructions about which content of your website Google and other Crawling systems should avoid Crawling.


How does robots.txt work?

Google support has an impressively helpful page about this file (check it out), but essentially its structure is as follows:

  • You first state which Crawling system the instruction is directed to, by means of the line:
    • User-agent: C_Example
  • This C_Example is usually:
    • All Crawling systems: *
    • Google Bot: Googlebot
    • etc.
  • You then Disallow or Allow this Crawling system from Crawling the directories you choose.
  • "/" means all the directories. For example:
    • User-agent: Googlebot
    • Allow: /
    • User-agent: *
    • Disallow: /

With this last set of instructions you would be allowing just Googlebot to Crawl all your directories and pages. The rest of the Crawling systems would be "banned" from Crawling your content.
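If you want to check how a set of rules behaves before publishing it, Python's standard library ships a robots.txt parser. A minimal sketch using the example rules above (the domain is just a placeholder):

```python
from urllib import robotparser

# The example rules from above: only Googlebot may crawl everything.
rules = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Googlebot is allowed; any other crawler is blocked.
print(parser.can_fetch("Googlebot", "https://example.com/any-page"))  # True
print(parser.can_fetch("Bingbot", "https://example.com/any-page"))    # False
```

This is handy because a single typo in robots.txt can block your whole site from being Crawled.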

  • You also have an interesting instruction, Sitemap:
    • Sitemap: your_website_sitemap.xml
    • It tells Google to Crawl your Sitemap each time it inspects your "robots.txt" file.
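The same standard-library parser can also read the Sitemap lines (the `site_maps()` method needs Python 3.8 or newer; the sitemap URL below is a placeholder):

```python
from urllib import robotparser

rules = """\
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# Returns the list of Sitemap URLs declared in the file (or None if there are none).
print(parser.site_maps())  # ['https://example.com/sitemap.xml']
```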

* Again, if you want more information, check out this Google support page: How to create a robots.txt file.


Be careful! Disallowing content from being Crawled doesn't mean that content is not Indexed!

  • If Google doesn't Crawl that content, its Ranking won't improve, and all the SEO measures (for that content) will be completely useless.
  • Google improves your Ranking by Crawling your content, checking its "value", "reading" it…
    • If you Disallow Google from Crawling it, Google won't be able to assign it the proper Ranking in its Search Engine. It will be a "black box" to Google.
  • But that content can still be found on the Internet.


So, which content is usually Disallowed from being Crawled?

  • Useless content, like "wp-admin" directories, "wp-login" pages…
  • Pages with no interesting content.


Where should you place this robots.txt file?

  • In your Root directory:
    • public_html/


What does our real robots.txt file look like?

We wrote our robots.txt ourselves, keeping it simple.

We just have:

User-agent: *
Disallow: /wp-login.php


We included the "wp-login.php" line because we had experienced several 5xx Errors on Google Search Console in login redirect pages, so we blocked all of them with this line.
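As a sanity check, you can verify that this one-line rule blocks the login page but leaves normal content crawlable; a quick sketch with Python's standard library (the domain and post URL are placeholders):

```python
from urllib import robotparser

# The content of our robots.txt file.
rules = """\
User-agent: *
Disallow: /wp-login.php
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# The login page is blocked for every crawler...
print(parser.can_fetch("Googlebot", "https://example.com/wp-login.php"))  # False
# ...but regular content is still crawlable.
print(parser.can_fetch("Googlebot", "https://example.com/some-post/"))    # True
```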

That is all.


2. Today’s Google Analytics report.


As you may have already noticed in the Post's top image, yesterday we had 7 real users (11 total, minus 4 users generated by the Google Crawling system: we sent 2 publications to Google Search Console, 1 Page and 1 Post, and each one generated 2 users).



Yesterday we had 13 unique pageviews.

Our Website has been 100% active for 2 months and 2 days.

If this data seems "poor" to you, you are completely right: 13 pageviews in a day is nothing… but if you check our first Posts (from 2 weeks ago) you will see a remarkable improvement (we had lots of "depressing" days with 0 pageviews).


Remember: we started Consuunt knowing nothing about SEO or Website building (we know about Business).

  • In this Diary, we share everything we are learning in a transparent way, so you can learn from our mistakes and successes.
    • Moreover, we were tired of reading (supposedly) super-successful SEO professionals who don't share their first weeks, when they had no experience at all.

It won't be easy, but we'll increase our pageviews together.


Thanks for being there!

Tomorrow, more.


© 2023 - Consuunt. Privacy Policy

