A quick search for “SEO ranking factors” will give you all of these answers and myriad others. There is a lot of information out there. And the reality is, while there are likely hundreds of variables working together to determine final placement, much of what is suggested is guesswork. And certainly, not all ranking factors are relevant to every business.
Attempting to replace a dead link with your own is easily and routinely identified as spam by the Wikipedia community, which expects dead links to be replaced with equivalent links at archive.org. Persistent attempts will quickly get your account blocked, and your website can be blacklisted (the Wikipedia blacklist is public, and there is evidence that Google uses it to determine rankings), which will have negative SEO consequences.
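If you do replace a dead link, the fix the community expects is an archive.org snapshot, and the Internet Archive exposes a public availability endpoint you can query before editing. A minimal sketch (the dead URL below is a placeholder):

```python
# Minimal sketch: look up an archived copy of a dead URL via the
# Internet Archive's public availability endpoint, so the dead link
# can be replaced with an archive.org snapshot.
import json
import urllib.parse
import urllib.request

def find_archived_copy(dead_url):
    """Return the closest Wayback Machine snapshot URL, or None."""
    api = ("https://archive.org/wayback/available?url="
           + urllib.parse.quote(dead_url, safe=""))
    with urllib.request.urlopen(api) as resp:
        data = json.load(resp)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    if snapshot and snapshot.get("available"):
        return snapshot["url"]
    return None

print(find_archived_copy("http://example.com/old-page"))  # placeholder URL
```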
I understand that some SEO agencies and departments are not built for big SEO campaigns. Strategic work takes time, and speeding (or scaling) through the development stage will likely do more harm than good. It's like cramming for a test — you're going to miss information that's necessary for a good grade. It would be my pleasure if this post inspired some change in your departments.
He is the co-founder of NP Digital. The Wall Street Journal calls him a top influencer on the web, Forbes says he is one of the top 10 marketers, and Entrepreneur Magazine says he created one of the 100 most brilliant companies. Neil is a New York Times bestselling author and was recognized as a top 100 entrepreneur under the age of 30 by President Obama and a top 100 entrepreneur under the age of 35 by the United Nations.
Really, it's just a matter of getting creative - grab a cup of caffeine and think for a minute about what resources you have to get some insight on your visitors (or target markets) and their needs before you dive in. Think about how much time it might take you (or what the reports would cost if you're going to buy some market research reports), and tack that onto your billing as an optional service.
Robots.txt is not an appropriate or effective way of blocking sensitive or confidential material. It only instructs well-behaved crawlers that the pages are not for them, but it does not prevent your server from delivering those pages to a browser that requests them. One reason is that search engines could still reference the URLs you block (showing just the URL, no title or snippet) if there happen to be links to those URLs somewhere on the Internet (like referrer logs). Also, non-compliant or rogue search engines that don't acknowledge the Robots Exclusion Standard could disobey the instructions of your robots.txt. Finally, a curious user could examine the directories or subdirectories in your robots.txt file and guess the URL of the content that you don't want seen.
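To make the "well-behaved crawlers" point concrete: a compliant crawler consults robots.txt voluntarily before fetching a page. A minimal sketch using Python's standard urllib.robotparser (the domain, path, and user-agent name are placeholders); nothing in this check stops a non-compliant client from requesting the disallowed URL directly:

```python
# A well-behaved crawler checks robots.txt before fetching a URL.
# robots.txt is a convention, not access control: a non-compliant
# client can skip this check and request the page anyway.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")  # placeholder domain
rp.read()

# A compliant crawler asks permission before fetching:
allowed = rp.can_fetch("MyCrawler", "https://example.com/private/report.html")
print(allowed)
```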
In 1998, two graduate students at Stanford University, Larry Page and Sergey Brin, developed "Backrub", a search engine that relied on a mathematical algorithm to rate the prominence of web pages. The number calculated by the algorithm, PageRank, is a function of the quantity and strength of inbound links.[22] PageRank estimates the likelihood that a given page will be reached by a web user who randomly surfs the web, and follows links from one page to another. In effect, this means that some links are stronger than others, as a higher PageRank page is more likely to be reached by the random web surfer.
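The random-surfer model translates almost directly into code: with probability d (the damping factor, conventionally 0.85) the surfer follows an outbound link from the current page, and otherwise jumps to a random page. A minimal power-iteration sketch over a toy four-page graph (the graph and values are purely illustrative):

```python
# Minimal sketch of PageRank as the "random surfer": with probability
# d the surfer follows a link on the current page; otherwise they jump
# to a uniformly random page. The toy graph below is illustrative.
def pagerank(links, d=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - d) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = d * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: its rank is spread evenly over all pages.
                for p in pages:
                    new_rank[p] += d * rank[page] / n
        rank = new_rank
    return rank

# Page A receives the most inbound links, so it ends up ranked highest.
toy = {"A": ["B"], "B": ["A"], "C": ["A"], "D": ["A", "B"]}
print(pagerank(toy))
```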

Page and Brin founded Google in 1998.[23] Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[24] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[25]
Sorry for the long comment, I'm just really happy to see that after all those years of struggle you finally made a breakthrough, and you definitely deserve it, bro. I've had my own struggles as well, and just reading this got me a little emotional, because I know what it feels like to never want to give up on your dreams and to always have faith that one day your time will come. It's all a matter of patience and learning from failures until you get enough experience to become someone who can generate traffic and bring enough value to readers to sustain long-term relationships.
Yep, and sometimes it's just about being a little creative. I've started a little blog on SEO/WordPress just for fun actually… no great content on it like here though… but because the competition is so tough in these niches, I decided to take another approach. I created a few WordPress plugins that users can download for free from wordpress.org… and of course these link back to my site, so this gets me visitors each day.
Hey Brian, love your site + content. Really awesome stuff! I have a question about dead link building on Wikipedia. I actually got a “user talk” message from someone moderating a Wikipedia page I replaced a dead link on. They claimed that “Wikipedia uses nofollow tags” so “additions of links to Wikipedia will not alter search engine rankings.” Any thoughts here?
It’s free to be active in online groups and on websites that are relevant to your business and community—and it helps you to obtain more traffic. Comment on blogs and social media posts, answer questions people are posting, and participate in conversations about your industry. The more you engage with your community, the more exposure and profile visits you get.
Landing pages are another free source of traffic to your website. These are pages specific to your offers, such as for redeeming a discount code, downloading a free guide, or starting a free trial. They contain the details users need in order to move forward and convert, and focus on one specific call to action, making it more likely to happen. Because landing pages are so specific, you can get very targeted in your messaging, increasing the traffic coming to those pages.
I feel I have great content…but most of it is within my email marketing campaign instead of my blogs. I’ve used my blogs to include links to my email marketing campaigns to lead to my product. In your opinion, should my blog content be the priority? I find my marketing emails sound more like a blog than just a “tip” or a reason to grab people to my list.
See the screenshot below for some of the sections of specific recommendations you can add, which will provide the meat of the document. Keep in mind this is a very flexible document – add the recommendations that make sense (for example, you may not always have specific design considerations for a project). Remember, it will be different every time you do it.
SEO may generate an adequate return on investment. However, search engines are not paid for organic search traffic, their algorithms change, and there are no guarantees of continued referrals. Due to this lack of guarantees and certainty, a business that relies heavily on search engine traffic can suffer major losses if the search engines stop sending visitors.[61] Search engines can change their algorithms, impacting a website's placement, possibly resulting in a serious loss of traffic. According to Google's CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes – almost 1.5 per day.[62] It is considered a wise business practice for website operators to liberate themselves from dependence on search engine traffic.[63] In addition to accessibility in terms of web crawlers (addressed above), user web accessibility has become increasingly important for SEO.
Hats off to your attention to detail and intelligence. I thoroughly enjoyed reading the post, very informative and engaging, and I've actually been applying these techniques and seeing amazing results. I also found a platform called soovledotcom which pulls keywords from Amazon, eBay, Yahoo Answers, Wikipedia, Google, and Bing, but your illustrations here will certainly yield superior results for organic SEO and finding keywords.
When referring to the homepage, a trailing slash after the hostname is optional since it leads to the same content ("https://example.com/" is the same as "https://example.com"). For the path and filename, a trailing slash would be seen as a different URL (signaling either a file or a directory), for example, "https://example.com/fish" is not the same as "https://example.com/fish/".
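A quick, purely illustrative way to see the distinction is to compare the path components with Python's urllib.parse:

```python
# A trailing slash in the path produces a different URL; at the
# hostname root, "" and "/" refer to the same homepage.
from urllib.parse import urlsplit

print(urlsplit("https://example.com/fish").path ==
      urlsplit("https://example.com/fish/").path)  # False: distinct URLs

print(urlsplit("https://example.com").path)   # ""  -- both resolve to
print(urlsplit("https://example.com/").path)  # "/"    the same homepage
```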
This was all free information I found online in less than an hour, and it gives me some great ideas for content, partnerships, and potential tools to build into my site to be relevant and useful to my target audience. Of course, this is just some quick, loose data, so I'll emphasize again: be careful where your data comes from (try to validate it when possible), and think about how to use your data wisely.

I am a little confused on your first point. Sorry if it is a simple one to understand and I’m just missing it. What good would finding dead links on Wiki do for my personal website? I thought you would explain how to find dead links faster within my own site… but it seems that your tip is way more valuable than that. I just don’t quite understand what I do to positively affect MY site with this. Any help would be great 🙂 THANKS!

Thanks for bringing up this point - I agree, Eric - competitive positioning can help you determine the value you bring to the table that your competitors don't. I'm all for it. Nielsen does some reports that provide awareness, likelihood to recommend, sentiment, and other insights for your site/brand and your competitors. You can also pull some of that type of insight out of social listening platforms like NetBase, SM2, Radian6, Dow Jones, Nielsen, and so many others. I've even done some hacked competitive sentiment comparisons before using search: searching for [brand or feature] + "like", "love", "hate", "wish", etc.
Hi Brian! I enjoy reading your posts and use as much info as I possibly can. I build and sell storage sheds and cabins. The problem I have is that there are no top bloggers in my market, or Wikipedia articles with dead links, that have to do with my market. 95% of my traffic and sales are generated via Facebook paid advertising. I would love to get more organic traffic and would be interested in your thoughts concerning this.
In 2007, Google announced a campaign against paid links that transfer PageRank.[30] On June 15, 2009, Google disclosed that they had taken measures to mitigate the effects of PageRank sculpting by use of the nofollow attribute on links. Matt Cutts, a well-known software engineer at Google, announced that Googlebot would no longer treat nofollow links in the same way, in order to prevent SEO service providers from using nofollow for PageRank sculpting.[31] As a result of this change, the usage of nofollow leads to evaporation of PageRank. In order to avoid the above, SEO engineers developed alternative techniques that replace nofollowed tags with obfuscated JavaScript and thus permit PageRank sculpting. Additionally, several solutions have been suggested that include the usage of iframes, Flash, and JavaScript.[32]

Lastly, it's important to remember that paralysis by overthinking is a real issue some struggle with. There's no pill for it (yet). Predicting perfection is a fool's errand. Get as close as you can within a reasonable timeframe, and prepare for future iteration. If you're working through your plan and find a soft spot at any time, simply pivot. It takes many hours of upfront work to get your strategy built, but it's not too hard to tweak as you go.
Hi Brian, I absolutely love your content. My competitors and influencers are very strong - most of them are government bodies or supported by government, or travel guides known worldwide. I constantly follow them and engage with them - like, share, comment, etc. They share photos that are submitted to them, and I do photography myself, which takes hours, and I still can't reach a big audience… Any idea what I could create that my influencers would love to share? (It's hard to find out what they care about; they get hundreds of photos submitted daily and collaborate with other big names…) Please help me.

In December 2009, Google announced it would be using the web search history of all its users in order to populate search results.[33] On June 8, 2010, a new web indexing system called Google Caffeine was announced. Designed to allow users to find news results, forum posts, and other content much sooner after publishing than before, Google Caffeine was a change to the way Google updated its index in order to make things show up on Google more quickly than before. According to Carrie Grimes, the software engineer who announced Caffeine for Google, "Caffeine provides 50 percent fresher results for web searches than our last index..."[34] Google Instant, real-time search, was introduced in late 2010 in an attempt to make search results more timely and relevant. Historically, site administrators have spent months or even years optimizing a website to increase search rankings. With the growth in popularity of social media sites and blogs, the leading engines made changes to their algorithms to allow fresh content to rank quickly within the search results.[35]

When Googlebot crawls a page, it should see the page the same way an average user does. For optimal rendering and indexing, always allow Googlebot access to the JavaScript, CSS, and image files used by your website. If your site's robots.txt file disallows crawling of these assets, it directly harms how well our algorithms render and index your content. This can result in suboptimal rankings.
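As an illustrative robots.txt fragment (the directory names are placeholders, not a recommended layout), the point is to keep render-critical asset paths crawlable even when other areas are disallowed:

```
# Illustrative robots.txt fragment - directory names are placeholders.
User-agent: Googlebot
# Keeping a private area out of the crawl is fine...
Disallow: /internal/
# ...but the CSS, JavaScript, and images used to render public pages
# must stay accessible, or rendering and indexing will suffer.
Allow: /assets/css/
Allow: /assets/js/
Allow: /assets/images/
```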