It is important to remember that although digital marketing uses different communication techniques from traditional marketing, its end objectives are no different from the objectives that marketing has always had. It can be easy to set objectives for digital marketing around ‘vanity metrics’ such as the number of ‘likes’ or followers, so it is useful to bear in mind the definition of marketing advanced by the Chartered Institute of Marketing:
Prioritizing clicks refers to display click ads; although this approach is advantageous in being ‘simple, fast and inexpensive’, the click-through rate for display ads in 2016 was only 0.10 percent in the United States. This means roughly one in a thousand display ads is actually clicked, so clicks on their own say little about relevance or effect. It follows that marketing companies should not use clicks alone to evaluate the effectiveness of display advertisements (Whiteside, 2016).[42]

More appropriately, blame Google for ever making the PageRank score visible. When Google first started, PageRank was something it talked about as part of its research papers, press releases and technology pages to promote itself as a smarter search engine than well-established and bigger rivals at the time — players like Yahoo, AltaVista and Lycos, to name a few.
Having a different description meta tag for each page helps both users and Google, especially in searches where users may bring up multiple pages on your domain (for example, searches using the site: operator). If your site has thousands or even millions of pages, hand-crafting description meta tags probably isn't feasible. In this case, you could automatically generate description meta tags based on each page's content.
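As a rough sketch (not Google's own tooling), such automation might derive each description from a page's body text; the `build_description` helper and the sample text below are hypothetical:

```python
import html

def build_description(page_text: str, max_length: int = 155) -> str:
    """Derive a meta description from page text (hypothetical helper).

    Collapses whitespace and truncates at a word boundary so the text
    stays within a typical snippet length.
    """
    text = " ".join(page_text.split())
    if len(text) <= max_length:
        return text
    return text[:max_length].rsplit(" ", 1)[0] + "..."

def description_tag(page_text: str) -> str:
    """Render the page's <meta name="description"> element, escaping HTML."""
    return '<meta name="description" content="{}">'.format(
        html.escape(build_description(page_text), quote=True)
    )

# Example: generate the tag from one page's opening paragraph.
print(description_tag("We stock a large selection of vintage and modern baseball cards, "
                      "with weekly specials and a searchable price guide."))
```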
Establishment of customer exclusivity: A list of customers and their details should be kept on a database for follow-up, and selected customers can be sent tailored offers and promotions relating to their previous buying behaviour. This is effective in digital marketing as it allows organisations to build up loyalty over email.[22]
A breadcrumb is a row of internal links at the top or bottom of the page that allows visitors to quickly navigate back to a previous section or the root page. Many breadcrumbs have the most general page (usually the root page) as the first, leftmost link and list the more specific sections out to the right. We recommend using breadcrumb structured data markup when showing breadcrumbs.
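The breadcrumb structured data in question is a schema.org BreadcrumbList; as an illustration, the sketch below (the trail and URLs are invented) builds that JSON-LD with Python:

```python
import json

# Hypothetical breadcrumb trail, from the most general page to the most specific.
trail = [
    ("Home", "https://www.example.com/"),
    ("Articles", "https://www.example.com/articles/"),
    ("SEO basics", "https://www.example.com/articles/seo-basics/"),
]

breadcrumb_ld = {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
        {"@type": "ListItem", "position": i, "name": name, "item": url}
        for i, (name, url) in enumerate(trail, start=1)
    ],
}

# The JSON output would be embedded in a <script type="application/ld+json"> block.
print(json.dumps(breadcrumb_ld, indent=2))
```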

What that means to us is that we can just go ahead and calculate a page’s PR without knowing the final value of the PR of the other pages. That seems strange but, basically, each time we run the calculation we’re getting a closer estimate of the final value. So all we need to do is remember each value we calculate and repeat the calculations lots of times until the numbers stop changing much.
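A minimal sketch of that repeated calculation in Python, using the PageRank formula discussed below with d = 0.85 and a tiny made-up link graph:

```python
# Iterative PageRank: keep recalculating until the values stop changing much.
links = {                     # hypothetical graph: page -> pages it links out to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
d = 0.85                      # damping factor
pr = {page: 1.0 for page in links}    # any starting guess will do

for _ in range(100):
    new_pr = {}
    for page in links:
        # Sum PR(T)/C(T) over every page T that links to this page.
        incoming = sum(pr[t] / len(links[t]) for t in links if page in links[t])
        new_pr[page] = (1 - d) + d * incoming
    if all(abs(new_pr[p] - pr[p]) < 1e-8 for p in links):
        break                 # the numbers have stopped changing much
    pr = new_pr

print(pr)                     # converges to the same values whatever the starting guess
```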
Page and Brin founded Google in 1998.[22] Google attracted a loyal following among the growing number of Internet users, who liked its simple design.[23] Off-page factors (such as PageRank and hyperlink analysis) were considered as well as on-page factors (such as keyword frequency, meta tags, headings, links and site structure) to enable Google to avoid the kind of manipulation seen in search engines that only considered on-page factors for their rankings. Although PageRank was more difficult to game, webmasters had already developed link building tools and schemes to influence the Inktomi search engine, and these methods proved similarly applicable to gaming PageRank. Many sites focused on exchanging, buying, and selling links, often on a massive scale. Some of these schemes, or link farms, involved the creation of thousands of sites for the sole purpose of link spamming.[24]
Google recommends that all websites use https:// when possible. The hostname is where your website is hosted, commonly using the same domain name that you'd use for email. Google differentiates between the "www" and "non-www" version (for example, "www.example.com" or just "example.com"). When adding your website to Search Console, we recommend adding both http:// and https:// versions, as well as the "www" and "non-www" versions.
An omni-channel approach not only benefits consumers but also benefits a business's bottom line: research suggests that customers spend more than double when purchasing through an omni-channel retailer as opposed to a single-channel retailer, and are often more loyal. This could be due to the ease of purchase and the wider availability of products.[24]
Robots.txt is not an appropriate or effective way of blocking sensitive or confidential material. It only instructs well-behaved crawlers that the pages are not for them, but it does not prevent your server from delivering those pages to a browser that requests them. One reason is that search engines could still reference the URLs you block (showing just the URL, no title or snippet) if there happen to be links to those URLs somewhere on the Internet (like referrer logs). Also, non-compliant or rogue search engines that don't acknowledge the Robots Exclusion Standard could disobey the instructions of your robots.txt. Finally, a curious user could examine the directories or subdirectories in your robots.txt file and guess the URL of the content that you don't want seen.
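The distinction matters in practice: a polite crawler asks robots.txt for permission, but nothing in the file stops a direct request. The sketch below illustrates this with Python's standard robotparser module (the example.com URLs are placeholders):

```python
from urllib import robotparser, request

# A well-behaved crawler consults robots.txt before fetching a page.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()
print(rp.can_fetch("MyCrawler", "https://www.example.com/private/report.html"))

# A direct request ignores robots.txt entirely; if the page exists and is not
# otherwise protected, the server will happily return it.
response = request.urlopen("https://www.example.com/private/report.html")
print(response.status)
```

Genuinely sensitive content should be placed behind authentication or removed from the server, rather than merely listed in robots.txt.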
Search engine result pages are protected from automated access by a range of defensive mechanisms and by the terms of service.[10] Because these result pages are the primary data source for SEO companies, tracking website placement for competitive keywords has become an important field of business and interest. Google has even used Twitter to warn users against this practice.[11]

Smartphone - In this document, "mobile" or "mobile devices" refers to smartphones, such as devices running Android, iPhone, or Windows Phone. Mobile browsers are similar to desktop browsers in that they can render a broad set of the HTML5 specification, although their screen size is smaller and in almost all cases their default orientation is vertical.


PageRank relies on the uniquely democratic nature of the web by using its vast link structure as an indicator of an individual page’s value. In essence, Google interprets a link from page A to page B as a vote, by page A, for page B. But, Google looks at considerably more than the sheer volume of votes, or links a page receives; for example, it also analyzes the page that casts the vote. Votes cast by pages that are themselves “important” weigh more heavily and help to make other pages “important.” Using these and other factors, Google provides its views on pages’ relative importance.
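For reference, the formula from the original Google paper, which the next paragraphs pick apart term by term (d is the damping factor, T_1 ... T_n are the pages linking to page A, and C(T) is the number of links going out of page T):

$$PR(A) = (1 - d) + d\left(\frac{PR(T_1)}{C(T_1)} + \cdots + \frac{PR(T_n)}{C(T_n)}\right)$$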
(1 - d) - The (1 – d) bit at the beginning is a bit of probability math magic so the “sum of all web pages' PageRanks will be one”: it adds in the bit lost by the d(...) term. It also means that if a page has no links to it (no backlinks), it will still get a small PR of 0.15 (i.e. 1 – 0.85). (Aside: the Google paper says “the sum of all pages” but they mean “the normalised sum” – otherwise known as “the average” to you and me.)

The PageRank formula also contains a damping factor (d). According to the PageRank theory, there is an imaginary surfer who is randomly clicking on links and who, at some point, gets bored and eventually stops clicking. The probability that the surfer will continue clicking at any step is the damping factor. This factor is introduced to stop some pages having too much influence; as a result, their total vote is damped down by multiplying it by 0.85 (the generally assumed value).
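A small worked example with invented numbers: if page A is linked to only by page B (PageRank 1.0, 4 outgoing links) and page C (PageRank 0.5, 2 outgoing links), then

$$PR(A) = 0.15 + 0.85\left(\frac{1.0}{4} + \frac{0.5}{2}\right) = 0.15 + 0.85 \times 0.5 = 0.575$$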


In an effort to manually control the flow of PageRank among pages within a website, many webmasters practice what is known as PageRank Sculpting[63]—which is the act of strategically placing the nofollow attribute on certain internal links of a website in order to funnel PageRank towards those pages the webmaster deemed most important. This tactic has been used since the inception of the nofollow attribute, but may no longer be effective since Google announced that blocking PageRank transfer with nofollow does not redirect that PageRank to other links.[64]
The third and final stage requires the firm to set a budget and management systems; these must be measurable touchpoints, such as audience reached across all digital platforms. Furthermore, marketers must ensure the budget and management systems are integrating the paid, owned and earned media of the company.[68] The Action and final stage of planning also requires the company to set in place measurable content creation e.g. oral, visual or written online media.[69]
Early versions of search algorithms relied on webmaster-provided information such as the keyword meta tag or index files in engines like ALIWEB. Meta tags provide a guide to each page's content. Using metadata to index pages was found to be less than reliable, however, because the webmaster's choice of keywords in the meta tag could potentially be an inaccurate representation of the site's actual content. Inaccurate, incomplete, and inconsistent data in meta tags could and did cause pages to rank for irrelevant searches.[10][dubious – discuss] Web content providers also manipulated some attributes within the HTML source of a page in an attempt to rank well in search engines.[11] By 1997, search engine designers recognized that webmasters were making efforts to rank well in their search engine, and that some webmasters were even manipulating their rankings in search results by stuffing pages with excessive or irrelevant keywords. Early search engines, such as Altavista and Infoseek, adjusted their algorithms to prevent webmasters from manipulating rankings.[12]