@jedaisoul — Oct 20, 2014 — Search bots start from the home page (usually index.html or a variant) and record every web page, image, etc. linked from it. They then read each "do follow" page that was linked, record the items linked from that, and so on. This includes "do follow" links from other web sites. So any file that is linked from any public web page is potentially visible. However, bots cannot read the directory listing of the folders on the web site, so files that are not linked from any public web page remain hidden.
As far as I'm aware, hidden files are not harmful in and of themselves.
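The link-following behaviour described above amounts to a breadth-first traversal of the link graph. A minimal sketch, using a hypothetical in-memory `site` map (page → outgoing links) in place of real HTTP fetches:

```python
from collections import deque

def crawl(site, start):
    """Breadth-first crawl: collect every page reachable by links from start.
    Pages that nothing links to can never be discovered, because a bot has
    no directory listing to read -- it only follows links."""
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in site.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

# A toy site: secret.html exists on the server but nothing links to it.
site = {
    "index.html": ["about.html", "logo.png"],
    "about.html": ["index.html"],
    "secret.html": [],
}
found = crawl(site, "index.html")
# secret.html is never reached, so it stays hidden from the bot.
```

The same logic holds for a real crawler; only the fetch step (HTTP request plus link extraction) differs.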
@kiwistech — Oct 20, 2014 — It is very easy for Google to crawl your website: when you submit your site to the search engine, it indexes the content, not the visual design, so the crawler reads content even when it is hidden from visitors. Creating hidden content or doorway pages is strictly prohibited, and I have seen people get banned for this kind of activity.
@mindstreak — Oct 20, 2014 — Google's algorithms are designed to detect hidden content on a website. That is why hidden content falls under black-hat SEO, and Google penalizes websites that use such techniques.
@shumicps — Oct 21, 2014 — There is no problem for Google reading hidden content: it reads the site's source code and can easily see that you are using hidden text, which is strictly forbidden.
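As the replies above note, a crawler sees the raw markup, so an inline style like `display:none` is plainly visible to it even though a visitor never sees the text. A crude sketch of what such detection might look like (a real engine also evaluates external CSS, text/background colours, and off-screen positioning, none of which this toy handles):

```python
from html.parser import HTMLParser

class HiddenTextFinder(HTMLParser):
    """Collect text nested inside elements with an inline hiding style."""
    def __init__(self):
        super().__init__()
        self.stack = []        # True for each open element that is hidden
        self.hidden_text = []
    def handle_starttag(self, tag, attrs):
        style = (dict(attrs).get("style") or "").replace(" ", "")
        hidden = "display:none" in style or "visibility:hidden" in style
        inherited = bool(self.stack) and self.stack[-1]
        self.stack.append(hidden or inherited)
    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()
    def handle_data(self, data):
        if self.stack and self.stack[-1] and data.strip():
            self.hidden_text.append(data.strip())

finder = HiddenTextFinder()
finder.feed('<p>Visible copy</p>'
            '<div style="display:none">keyword <b>stuffing</b></div>')
```

Here `finder.hidden_text` ends up holding the stuffed keywords while the visible paragraph is ignored, which is roughly why hidden text offers no cover from the crawler.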