One thing to bear in mind is that the results we get from the calculations are proportions. The figures must then be set against a scale (known only to Google) to arrive at each page’s actual PageRank. Even so, we can use the calculations to channel a site’s PageRank around its pages so that certain pages receive a higher proportion of it than others.
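To make the idea concrete, here is a minimal power-iteration sketch for a hypothetical three-page site. The link graph, damping factor and iteration count are illustrative assumptions, not Google's actual parameters, and the output values are proportions of the site's PageRank rather than absolute scores.

```python
# Minimal power-iteration PageRank sketch for a hypothetical three-page site.
# The link graph, damping factor and iteration count are illustrative only;
# the scale Google applies to these proportions is not public.
links = {
    "home": ["products", "contact"],
    "products": ["home"],
    "contact": ["home"],
}

damping = 0.85
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with equal proportions

for _ in range(50):
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += damping * share
    rank = new_rank

# The values are shares of the site's total PageRank, not absolute scores.
for page, value in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {value:.3f}")
```

Notice how the home page ends up with a larger share simply because more internal links point at it; that is the channelling effect described above.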

When returning results on a SERP, search engines factor in the “relevance” and “authority” of each website to determine which sites are the most helpful and useful for the searcher. In an attempt to provide the most relevant results, the exact same search by different users may produce different SERPs, depending on the type of query. SERPs are tailored specifically for each user based on their unique browsing history, location, social media activity and more.

Having a different description meta tag for each page helps both users and Google, especially in searches where users may bring up multiple pages on your domain (for example, searches using the site: operator). If your site has thousands or even millions of pages, hand-crafting description meta tags probably isn't feasible. In this case, you could automatically generate description meta tags based on each page's content.
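One hedged way to automate this is to fall back to each page's own opening copy. The function below is a sketch under assumed conventions: the 155-character cap is a common rule of thumb rather than a documented Google limit, and the sample body text is invented.

```python
import re

def auto_meta_description(page_text: str, max_length: int = 155) -> str:
    """Build a description meta tag from a page's body text.

    The 155-character cap is a rough convention, not a Google-documented
    limit; adjust it to suit your templates.
    """
    # Collapse whitespace and work from the leading sentences of the page copy.
    text = re.sub(r"\s+", " ", page_text).strip()
    if len(text) <= max_length:
        return text
    # Cut at the last word boundary before the cap and add an ellipsis.
    truncated = text[:max_length].rsplit(" ", 1)[0]
    return truncated + "..."

# Example usage with hypothetical page content:
body = ("Our family law team has handled divorce and custody cases since 1998. "
        "We offer free consultations and flexible payment plans for clients across the state.")
print(f'<meta name="description" content="{auto_meta_description(body)}">')
```

A template engine or CMS hook would normally call something like this at render time, so every page gets a unique description without any hand-crafting.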
For example, suppose you're a law firm targeting the phrase "divorce attorney" with a broad match ad. Your ad should appear on the results page for the search query "divorce attorney," but it could also show up for the phrases "reasons for divorce," "DUI attorney" or "dealing with divorce for children." In these cases, you may be wasting money on irrelevant searches.
Keyword analysis. From the nomination list, identify a targeted list of keywords and phrases. Review competitive lists and other pertinent industry sources. Use your preliminary list to determine an indicative number of recent search engine queries and how many websites are competing for each keyword. Prioritize keywords and phrases, plurals, singulars and misspellings. (If search users commonly misspell a keyword, identify and target it too.) Note that Google will try to correct a misspelled term when searching, so use this tactic with care. A rough prioritization sketch follows this step.
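As a rough illustration of the prioritization step, a script could rank candidate terms by the ratio of query volume to competing pages. The figures and the scoring formula below are assumptions for illustration, not output from any real keyword tool, which would use richer signals.

```python
# Hypothetical keyword candidates: (term, estimated monthly queries, competing pages).
candidates = [
    ("divorce attorney", 40000, 2_500_000),
    ("divorce attorny", 900, 35_000),       # common misspelling, kept deliberately
    ("divorce attorneys", 12000, 1_800_000),
    ("family law firm", 8000, 600_000),
]

def priority(term, queries, competitors):
    # Simple demand-to-supply ratio; real keyword tools use richer signals.
    return queries / max(competitors, 1)

ranked = sorted(candidates, key=lambda c: priority(*c), reverse=True)
for term, queries, competitors in ranked:
    print(f"{term:22s} queries={queries:>6} competitors={competitors:>9} "
          f"score={priority(term, queries, competitors):.4f}")
```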
To a spider, www.domain.com/, domain.com/, www.domain.com/index.html and domain.com/index.html are different URLs and, therefore, different pages. Surfers arrive at the site’s home page whichever of the URLs is used, but spiders see them as individual URLs, and that makes a difference when working out the PageRank. It is better to standardize on one URL for the site’s home page. Otherwise, each URL can end up with its own slice of PageRank, whereas all of it should have gone to just one URL.
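A small sketch of that standardization, assuming you have chosen www.domain.com/ as the canonical form (the hostname is of course a placeholder):

```python
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "www.domain.com"  # placeholder; substitute your preferred hostname

def canonical_home_url(url: str) -> str:
    """Map the common home-page variants onto one canonical URL."""
    scheme, host, path, query, fragment = urlsplit(url)
    if host in ("domain.com", "www.domain.com"):
        host = CANONICAL_HOST
    if path in ("", "/index.html", "/index.htm"):
        path = "/"
    return urlunsplit((scheme, host, path, query, fragment))

for variant in ("http://www.domain.com/",
                "http://domain.com/",
                "http://www.domain.com/index.html",
                "http://domain.com/index.html"):
    print(variant, "->", canonical_home_url(variant))
```

In practice you would also serve a 301 redirect from the non-canonical variants, so that both surfers and spiders converge on the single URL that should collect the PageRank.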
Establishment of customer exclusivity: A list of customers and their details should be kept in a database for follow-up, and selected customers can be sent tailored offers and promotions related to their previous buying behaviour. This is effective in digital marketing because it allows organisations to build up loyalty over email.[22]
The box on the right side of this SERP is known as the Knowledge Graph (also sometimes called the Knowledge Box). This is a feature Google introduced in 2012 that pulls data related to commonly asked questions from sources across the web and presents concise answers in one central location on the SERP. In this case, you can see a wide range of information about Abraham Lincoln, such as the date and place of his birth, his height, the date on which he was assassinated, his political affiliation, and the names of his children – many of these facts link to their own relevant pages.
A variety of methods can increase the prominence of a webpage within the search results. Cross linking between pages of the same website to provide more links to important pages may improve its visibility.[47] Writing content that includes frequently searched keyword phrases, so as to be relevant to a wide variety of search queries, will tend to increase traffic.[47] Updating content so as to keep search engines crawling back frequently can give additional weight to a site. Adding relevant keywords to a web page's metadata, including the title tag and meta description, will tend to improve the relevancy of a site's search listings, thus increasing traffic. URL normalization of web pages accessible via multiple URLs, using the canonical link element[48] or via 301 redirects, can help make sure links to different versions of the URL all count towards the page's link popularity score.
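As a sketch of that last normalization step, a template or build script could emit the canonical link element[48] for every duplicate version of a page. The URLs and the mapping table here are placeholders; in a real site this would come from your CMS or routing configuration.

```python
# Placeholder mapping from duplicate URLs to their canonical version.
CANONICAL = {
    "http://example.com/shoes?ref=footer": "https://example.com/shoes",
    "http://example.com/shoes/index.html": "https://example.com/shoes",
}

def canonical_link_tag(requested_url: str) -> str:
    """Return the canonical link element a duplicate URL should carry in its <head>."""
    target = CANONICAL.get(requested_url, requested_url)
    return f'<link rel="canonical" href="{target}">'

print(canonical_link_tag("http://example.com/shoes?ref=footer"))
# -> <link rel="canonical" href="https://example.com/shoes">
```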
Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, all webmasters needed only to submit the address of a page, or URL, to the various engines which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[5] The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains. All of this information is then placed into a scheduler for crawling at a later date.
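A toy sketch of that spider-and-indexer pipeline is shown below. The helper names and the in-memory word-count "index" are invented for illustration; a production crawler would also respect robots.txt, throttle requests, and store its index and crawl frontier far more carefully.

```python
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin
from collections import Counter
import re

class LinkAndTextParser(HTMLParser):
    """Collect hyperlinks and text content from a downloaded page."""
    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.text.append(data)

def crawl_and_index(url: str):
    """Download one page, index its word counts, and return outbound links."""
    html = urlopen(url).read().decode("utf-8", errors="replace")  # the "spider" step
    parser = LinkAndTextParser()
    parser.feed(html)
    words = re.findall(r"[a-z]+", " ".join(parser.text).lower())
    index_entry = Counter(words)                                  # the "indexer" step
    out_links = [urljoin(url, link) for link in parser.links]     # handed to the scheduler
    return index_entry, out_links

# Example: index one page and queue its links for crawling at a later date.
entry, frontier = crawl_and_index("https://example.com/")
print(entry.most_common(5))
print(frontier[:5])
```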