The PageRank algorithm outputs a probability distribution used to represent the likelihood that a person randomly clicking on links will arrive at any particular page. PageRank can be calculated for collections of documents of any size. Several research papers assume that the distribution is evenly divided among all documents in the collection at the beginning of the computational process. The PageRank computation requires several passes, called “iterations”, through the collection to adjust approximate PageRank values so they more closely reflect the theoretical true value.

[Figure: Cartoon illustrating the basic principle of PageRank. The size of each face is proportional to the total size of the other faces pointing to it.]
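To make the idea of iterations concrete, here is a minimal Python sketch; the damping factor of 0.85, the iteration count, and the tiny three-page graph are illustrative assumptions, not the actual implementation used by any search engine.

```python
# A minimal PageRank sketch: repeated "iterations" over a small link graph.
# The damping factor, iteration count, and three-page graph are illustrative.

def pagerank(links, damping=0.85, iterations=20):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}      # start from a uniform distribution

    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outgoing in links.items():
            if not outgoing:                      # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank

    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(graph))   # ranks sum to 1.0 and settle after a few iterations
```

Each pass redistributes every page's current rank across the pages it links to, which is exactly the repeated adjustment the paragraph above describes.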

Let’s assume that it is a logarithmic, base 10 scale, and that it takes 10 properly linked new pages to move a site’s important page up 1 toolbar point. It will take 100 new pages to move it up another point, 1,000 new pages to move it up one more, 10,000 to the next, and so on. That’s why moving up at the lower end is much easier than at the higher end.
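Under that assumption, the arithmetic looks like this; the pages_for_point helper below is invented purely for illustration, not a published formula.

```python
# Illustrative only: if the toolbar scale really is base-10 logarithmic, each
# additional point costs roughly ten times as many properly linked new pages.

def pages_for_point(nth_point, base=10):
    """New pages needed to climb the nth additional toolbar point."""
    return base ** nth_point

for n in range(1, 5):
    print(f"point +{n}: ~{pages_for_point(n):,} new pages")
# point +1: ~10, point +2: ~100, point +3: ~1,000, point +4: ~10,000
```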
Nearly all PPC engines allow you to split-test, but make sure your ad variations are displayed at random so they generate meaningful data. Some PPC platforms use predictive algorithms to display the ad variation that's most likely to succeed, but this diminishes the integrity of your split-test data. You can find instructions on how to ensure your ad versions are displayed randomly in your PPC engine's help section.
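To see why random rotation matters, here is a toy Python simulation; serve_ad and the two variations "A" and "B" are hypothetical and not part of any PPC platform.

```python
import random

# Toy simulation of what "displayed at random" means for a split-test: rotating
# variations uniformly, rather than favoring a predicted winner, keeps the
# comparison meaningful.

def serve_ad(variations):
    """Pick an ad variation uniformly at random for each impression."""
    return random.choice(variations)

impressions = {"A": 0, "B": 0}
for _ in range(10_000):
    impressions[serve_ad(["A", "B"])] += 1
print(impressions)   # roughly a 50/50 split, so CTR differences reflect the ads
```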
AdWords Customer Match lets you target customers based on an initial list of e-mail addresses. Upload your list and you can do things like serve different ads or bid a different amount based on a shopper’s lifecycle stage. Serve one ad to an existing customer. Serve another to a subscriber. And so on. Facebook offers a similar tool, but AdWords was the first appearance of e-mail-driven customer matching in pay-per-click search.
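The segmentation idea can be sketched in a few lines of Python; this is not the AdWords API, and every address, stage, ad name, and bid below is invented for illustration.

```python
# Hypothetical sketch of Customer Match-style segmentation: map each uploaded
# e-mail address to a lifecycle stage, then pick an ad and bid per stage.

LIFECYCLE_STAGE = {
    "alice@example.com": "customer",
    "bob@example.com": "subscriber",
}

AD_PLAN = {
    "customer":   {"ad": "loyalty-offer",  "bid": 0.40},
    "subscriber": {"ad": "first-purchase", "bid": 0.75},
}

def plan_for(email):
    """Choose the ad and bid for one shopper based on lifecycle stage."""
    stage = LIFECYCLE_STAGE.get(email, "unknown")
    return AD_PLAN.get(stage, {"ad": "generic", "bid": 0.25})

print(plan_for("alice@example.com"))   # {'ad': 'loyalty-offer', 'bid': 0.4}
```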
Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed only to submit the address of a page, or URL, to the various engines which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[5] The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains. All of this information is then placed into a scheduler for crawling at a later date.
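For a rough sense of that spider, indexer, and scheduler flow, here is a minimal standard-library Python sketch; LinkAndTextParser and crawl_once are invented names, and a real crawler would add robots.txt handling, politeness delays, and deduplication.

```python
# A sketch of the spider/indexer/scheduler flow described above, using only the
# Python standard library: fetch a page, record its words and their positions,
# extract its links, and queue them for crawling at a later date.

from collections import deque
from html.parser import HTMLParser
from urllib.request import urlopen


class LinkAndTextParser(HTMLParser):
    """Collects the visible words and outgoing links of one page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.words = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.words.extend(data.split())


def crawl_once(url, schedule):
    """Download one page, index its words and positions, and queue its links."""
    html = urlopen(url).read().decode("utf-8", errors="ignore")
    parser = LinkAndTextParser()
    parser.feed(html)

    index = {}                                   # word -> positions on the page
    for position, word in enumerate(parser.words):
        index.setdefault(word.lower(), []).append(position)

    schedule.extend(parser.links)                # to be crawled at a later date
    return index


schedule = deque(["https://example.com/"])
index = crawl_once(schedule.popleft(), schedule)
```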