Hi there, I'm interested in trying your Wikipedia trick, but I'm not sure how I should go about it, because I've read posts warning: “Please note that Wikipedia hates spam, so don't spam them; if you do, they can block your IP and/or website URL. Check their blocking policy, and if they blacklist you, you can be sure Google will know about it.”
The first relates to internal link structure. I've made the mistake you say you've seen so often: I have a primary keyword and have used that keyword in the main navigation, linked to a page optimized for that keyword. But I've also got a bunch of contextual links in posts pointing to that page, usually with the keyword in the anchor text. I now understand that those internal links aren't helping much, at least from an SEO perspective. Am I better off removing that keyword and direct link from the menu and simply linking to the page from multiple posts and pages within the site? Or will I get better results leaving it in the main menu and changing the contextual links in the posts to point to a related page with a different keyword?
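(For reference, here is a rough sketch of the two link types being described in the comment above; the URL /blue-widgets/ and the keyword "blue widgets" are made-up examples, not taken from the site in question.)

    <!-- main-menu link with an exact-match keyword anchor -->
    <nav>
      <a href="/blue-widgets/">Blue Widgets</a>
    </nav>

    <!-- contextual link inside a post, pointing at the same page with the same keyword anchor -->
    <p>Our guide to <a href="/blue-widgets/">blue widgets</a> covers sizing and materials.</p>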
You should build a website to benefit your users, and any optimization should be geared toward making the user experience better. One of those users is a search engine, which helps other users discover your content. Search Engine Optimization is about helping search engines understand and present content. Your site may be smaller or larger than our example site and offer vastly different content, but the optimization topics we discuss below should apply to sites of all sizes and types. We hope our guide gives you some fresh ideas on how to improve your website, and we'd love to hear your questions, feedback, and success stories in the Google Webmaster Help Forum.

Robots.txt is not an appropriate or effective way of blocking sensitive or confidential material. It only instructs well-behaved crawlers that the pages are not for them, but it does not prevent your server from delivering those pages to a browser that requests them. One reason is that search engines could still reference the URLs you block (showing just the URL, no title or snippet) if there happen to be links to those URLs somewhere on the Internet (like referrer logs). Also, non-compliant or rogue search engines that don't acknowledge the Robots Exclusion Standard could disobey the instructions of your robots.txt. Finally, a curious user could examine the directories or subdirectories in your robots.txt file and guess the URL of the content that you don't want seen.
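To make this concrete, here is a minimal robots.txt sketch (the /private/ directory is a made-up example):

    User-agent: *
    Disallow: /private/

This only asks compliant crawlers not to fetch anything under /private/; the server will still return those pages to any browser or bot that requests them directly, and the path itself is now readable by anyone who opens the robots.txt file. For genuinely sensitive content, server-side access controls such as password protection are the safer option.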


I completely agree that defining a target audience is a great first step, but I would ask whether adding competitors to the analysis (mentioned here as a later step) helps draw out who your target audience is via comparison, i.e. showing who you are and who you are not. I would be very interested to hear opinions on how this tactic can be used within the overall step, in coordination with targeted keyword discovery.