Thursday, 13 February 2014

History of Search Engines

Before there was Yahoo! Before there was WebCrawler. Before there was AltaVista. There were Archie, Jughead, and Veronica (but no Betty). Before 1990, there was no way to search the Internet. At that time there were few websites. Most sites contained collections of files that you could download (by FTP) if you knew they were there. The only way to find out that a file was on a specific site was by word of mouth. Then came Archie. Created by Peter Deutsch, Alan Emtage, and Bill Heelan, Archie was the first program to scour the Internet for the contents of anonymous FTP sites all over the world. It was not a true search engine but, like Yahoo!, a searchable list of files. You needed to know the exact name of the file you were looking for; armed with that information, Archie would tell you from which FTP site you could download it.
If Archie was the grandfather of all search engines, then Veronica was the grandmother. Developed by the University of Nevada Computing Services, it searched Gopher servers for files. A Gopher server stores plain-text documents while an FTP server stores other kinds of files (images, programs, etc.). Jughead performed functions similar to Veronica.
By 1993, the Web was beginning to change. Rather than the network being populated mainly by FTP sites, Gopher sites, and e-mail servers, websites began to proliferate. In response to this change, Matthew Gray introduced his World Wide Web Wanderer. The program was a series of robots that hunted down web URLs and listed them in a database called Wandex.
Again around 1993, ALIWEB was developed as the web page equivalent to Archie and Veronica. Instead of cataloging files or text documents, webmasters would submit a special index file with site information.
The next development in cataloging the web came late in 1993 with spiders. Like robots, spiders scoured the web for web page information. These early versions looked at page titles, header information, and URLs as sources of keywords. The database techniques used by these early search engines were primitive: a search would return hits in the order they happened to sit in the database, and only one of these engines made any attempt to rank the hits according to the sites' relevance to the keywords.
The first popular search engine, Excite, has its roots in these early days of web cataloging. The Excite project was begun by a group of Stanford undergraduates and was released for general use in 1994.
Again in 1994, two Stanford Ph.D. students posted web pages with links on them. They called these pages Yahoo!. As the number of links began to grow, they developed a hierarchical listing. As the pages became more popular, they developed a way to search through all of the links. Yahoo! became the first popular searchable directory. It was not considered a search engine because all the links on the pages were updated manually rather than automatically by a spider or robot, and the search feature searched only those links.
The first full-text search engine was WebCrawler. WebCrawler began as an undergraduate seminar project at the University of Washington. It became so popular that it virtually shut down the University of Washington's network because of the amount of traffic it generated. Eventually, AOL bought it and operated it on its own network. Later, Excite bought WebCrawler from AOL, but AOL still uses it in its NetFind feature. At Home Corp. currently owns WebCrawler (as well as Excite and Blue Mountain Cards).
The next search engine to appear on the web was Lycos. It was named for the wolf spider (Lycosidae lycosa) because the wolf spider pursues its prey. According to Michael Mauldin in "Lycos: Design choices in an Internet search service" (1997), by 1997 Lycos had indexed more than 60,000,000 web pages and ranked first on Netscape's list of search engines.
The next major player in what was becoming the search engine wars was Infoseek. The Infoseek search engine itself was unremarkable and showed little innovation beyond WebCrawler and Lycos. What made it stand out was its deal with Netscape to become the browser's default search engine, replacing Yahoo!.
By 1995, Digital Equipment Corporation (DEC) had introduced AltaVista. This search engine contained some innovations that set it apart from the others. First, it ran on a group of DEC Alpha-based computers, which at the time were among the most powerful processors in existence. This meant that the search engine could run even under very high traffic with hardly any slowdown. (The DEC Alpha processor ran a version of UNIX, which from its inception had been designed for such heavy multi-user loads.) AltaVista also let the user ask a question rather than enter keywords, an innovation that made it easier for the average user to find the results needed. It was also the first to implement Boolean operators (AND, OR, NOT) to help refine searches, and it gave tips to help the user refine them.
Next came HotBot, a project from the University of California at Berkeley. Designed to be the most powerful search engine, its current owner, Wired Magazine, claims that it can index more than 10,000,000 pages a day. Wired claims that HotBot should be able to update its entire index daily, making it the most up-to-date of any major search engine. (You'll have the opportunity to test that claim if you wish.)
In 1995, a new type of search engine was introduced: the metasearch engine. The concept was simple. The metasearch engine would take keywords (or a question) from the user and forward them to all of the major search engines. Those search engines would send their results back to the metasearch engine, which would format all the hits on one page for concise viewing. The first of these was Metacrawler. Metacrawler initially ran afoul of the major search engines because it took their output but not the advertising banners their users would normally see, reducing the search engine companies' advertising revenues. Metacrawler finally relented and began including the banner ads with each set of search results.
Besides Metacrawler, other major metasearch engines exist, including ProFusion, Dogpile, Ask Jeeves, and C-Net's Search.com. Ask Jeeves combines features such as natural-language queries with the ability to search using several different search engines. C-Net's entry claims to use over 700 different search engines to obtain its results. Although the concept is sound, searches using a metasearch engine are only as good as the underlying search engines and directories and the question that the user asks.
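To make the fan-out-and-merge idea concrete, here is a minimal Python sketch. The individual "engines" are stand-in functions returning canned placeholder URLs (the addresses are invented); real engines would be queried over the network and their result formats would differ, so treat this purely as an illustration of the concept.

    # Minimal sketch of a metasearch engine: fan one query out to several
    # backends and merge the hits onto a single result list.
    # The "engines" below are stand-ins returning made-up URLs, not real APIs.

    def engine_a(query):
        return [f"http://engine-a.example/result?q={query}&rank={i}" for i in range(1, 4)]

    def engine_b(query):
        return [f"http://engine-b.example/result?q={query}&rank={i}" for i in range(1, 4)]

    def metasearch(query, engines):
        """Forward the query to every engine and merge the hits, dropping duplicates."""
        merged, seen = [], set()
        for engine in engines:
            for hit in engine(query):
                if hit not in seen:
                    seen.add(hit)
                    merged.append(hit)
        return merged

    for hit in metasearch("wolf+spider", [engine_a, engine_b]):
        print(hit)

The merge step here simply keeps the order in which hits arrive and drops duplicates; a real metasearch engine would also have to reconcile each underlying engine's own ranking and, as noted above, its advertising.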

What is a Search Engine?


Search engines are programs that search documents for specified keywords and return a list of the documents where the keywords were found. "Search engine" really describes a general class of programs; however, the term is often used specifically for systems like Google, Bing, and Yahoo! Search that enable users to search for documents on the World Wide Web.

Web Search Engines

Typically, Web search engines work by sending out a spider to fetch as many documents as possible. Another program, called an indexer, then reads these documents and creates an index based on the words contained in each document. Each search engine uses a proprietary algorithm to create its indices such that, ideally, only meaningful results are returned for each query.
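As a rough illustration of the spider-plus-indexer pipeline just described, the following Python sketch builds a toy inverted index over a couple of made-up pages. The URLs, text, and whitespace tokenizer are simplified assumptions for the example, not any real engine's data or code.

    from collections import defaultdict

    # Toy corpus standing in for pages a spider might have fetched.
    fetched_pages = {
        "http://example.com/a": "search engines index documents by keywords",
        "http://example.com/b": "a spider fetches documents and an indexer builds the index",
    }

    def build_inverted_index(pages):
        """Map each word to the set of URLs whose text contains it."""
        index = defaultdict(set)
        for url, text in pages.items():
            for word in text.lower().split():
                index[word].add(url)
        return index

    def search(index, keyword):
        """Return the URLs indexed under a single keyword."""
        return sorted(index.get(keyword.lower(), set()))

    index = build_inverted_index(fetched_pages)
    print(search(index, "documents"))   # both pages
    print(search(index, "spider"))      # only the second page

Because the index is built ahead of time, answering a query is just a dictionary lookup, which is what makes searching fast even over a very large crawl.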

How Do Search Engines Work?

Every search engine uses its own complex mathematical formulas to generate search results. The results for a specific query are then displayed on the SERP (search engine results page). Search engine algorithms take the key elements of a web page, including the page title, content, and keyword density, and come up with a ranking for where to place the results on the pages. Each search engine's algorithm is unique, so a top ranking on Yahoo! does not guarantee a prominent ranking on Google, and vice versa. To make things more complicated, the algorithms used by search engines are not only closely guarded secrets, they are also constantly undergoing modification and revision. This means that the criteria used to best optimize a site must be surmised through observation, as well as trial and error, and not just once but continuously.
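The real formulas are secret, as noted above, but a deliberately simplified scoring sketch shows the kind of calculation involved. The weights given to title matches and keyword density below are invented purely for illustration and bear no relation to any actual engine's algorithm.

    def toy_score(query, title, body, title_weight=3.0, density_weight=10.0):
        """Score a page for a one-word query using title matches and keyword density.
        The weights are arbitrary illustrative choices, not real ranking factors."""
        query = query.lower()
        title_hits = title.lower().split().count(query)
        body_words = body.lower().split()
        body_hits = body_words.count(query)
        density = body_hits / len(body_words) if body_words else 0.0
        return title_weight * title_hits + density_weight * density

    # Two made-up pages: (title, body text).
    pages = {
        "page-1": ("Wolf spider facts", "the wolf spider hunts its prey on foot"),
        "page-2": ("Garden plants", "a spider plant is easy to grow"),
    }

    # Rank the pages for the query "spider"; higher scores come first.
    ranked = sorted(pages, key=lambda p: toy_score("spider", *pages[p]), reverse=True)
    print(ranked)   # page-1 first, thanks to the title match

Changing either weight can reorder the results, which is one way to see why the same page can rank very differently on different engines.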

Why Do We Need Search Engines?

Search engines are commonly used because they allow an individual to search through millions of websites within a very short period of time. They also index the data they collect, thereby greatly reducing the time spent on research.


  1. Internet marketing and SEO strategies are the only form of marketing that puts your business, product, or service in front of a targeted market of prospective customers who are actively seeking exactly what your company offers.
  2. Brand awareness and increased visibility matter: 8 out of 10 people who use the internet to find a product or service eventually do business online.
  3. Your competition is not sitting on the sidelines waiting to see what others will do. The stats speak for themselves: Web SEO is used on 39% of all e-commerce websites, up from 19% in 2007.
  4. Get your website working for you and earn back your ROI. A website with SEO strategies applied works for you 24/7, 365 days a year, whereas traditional ads on radio, in newspapers, on TV, and on billboards are time sensitive.
  5. Businesses are switching from print media to e-media advertising; one business journal estimates that $25B will leave print media and move directly to Web marketing and SEO strategies.
  6. To get a better seat, get on the bus before the others: 39% of all websites have received some form of web optimization. Web marketing accounts for 8% of all media advertising spending and is estimated to almost double to 15% by 2013. The next step is getting started with Web SEO.
  7. Organic search engine results receive 85% of all end-user clicks, as opposed to only 15% for sponsored ads such as Pay Per Click (PPC).
  8. If you don't act, your competition will, and you will be left behind. Internet marketing and Web SEO spending is up an average of 9% a year through 2011, with market growth of $17B from 2007 to 2011 ($41B to $69B).
  9. E-commerce alone saw double-digit growth of 16.5% in 2010.
  10. The tool of choice for answering our questions and finding a product or service is an internet search engine like Google. Nearly 250 million searches are performed per day on Google alone, and this number has trended upwards since the introduction of 3G and faster wireless networks, which provide fast mobile web browsing on smartphones and on laptops with wireless sticks.

Wednesday, 12 February 2014

Types of CMS

Types of CMS Websites

Some of the different types of CMS websites are as follows:
  • Web Content Management System: Used where a standalone application is required to create, manage, store, and organize website content. Web content may include photos, videos, audio, and text to interact with users. Indexing, assembling content at runtime, and delivering the requested content to the user are the three main functions of a Web Content Management System. A Web CMS is a boon to non-technical users because it enables them to edit their website without knowing how to hand-code.
  • Component Content Management System: Content here is more structured and is called a component. Every component has its own lifecycle of authoring, versioning, approving, and using. In addition to versioning, a Component Content Management System helps track relationships between pieces of content, such as graphics and text.
  • Enterprise Content Management System: As the name suggests, an Enterprise Content Management System deals with large volumes of data, primarily for bigger enterprises. It involves organizing an enterprise's day-to-day documents through a structured methodology. The content management application, one part of an Enterprise Content Management System, lets users add, modify, and remove content without the intervention of a web administrator. The content delivery application gathers that information, compiles it, and displays it on the website.

Advantages of Using CMS

Main Advantages of CMS Website

Some of the advantages of using a CMS website are as follows:
  • Centralized System: A centralized system brings all your data into one place, which serves as a central repository. Without such a system, the data might get scattered, resulting in redundancy.
  • Accuracy: Content in a CMS has to be stored only once and can be reused multiple times, giving flexibility of usage. Additionally, a CMS keeps track of content reuse and of any updates to the content, thus keeping the content accurate and up to date.
  • Secured Usage: By assigning user privileges, it is easy to keep the data secure, since only authorized people are allowed to edit the content.
  • SEO friendly: Best practices for search engine optimization, such as meaningful URLs, page titles, and correct metadata, are easy to apply with a CMS (a small example follows after this list).
  • Low Cost: Some content management platforms, like Drupal, WordPress, and Joomla!, are open source, while others incur only a minimal cost.
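As a small, concrete example of the "meaningful URLs" point above, many CMS platforms generate a URL slug from the page title. The helper below is a generic Python illustration of that idea, not code taken from any particular CMS.

    import re

    def slugify(title):
        """Turn a page title into a readable, SEO-friendly URL slug."""
        slug = title.lower()
        slug = re.sub(r"[^a-z0-9]+", "-", slug)   # collapse spaces/punctuation to hyphens
        return slug.strip("-")

    print(slugify("Advantages of Using CMS!"))   # advantages-of-using-cms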

What is a CMS (Content Management System)?

A short web definition: programs responsible for the creation of a site's framework, including image media, audio files, web content, skins, and many others. ...
A content management system (CMS) is a system used to manage the content of a Web site. Typically, a CMS consists of two elements: the content management application (CMA) and the content delivery application (CDA). The CMA element allows the content manager or author, who may not know Hypertext Markup Language (HTML), to manage the creation, modification, and removal of content from a Web site without needing the expertise of a Webmaster. The CDA element uses and compiles that information to update the Web site. The features of a CMS system vary, but most include Web-based publishing, format management, revision control, and indexing, search, and retrieval.
The Web-based publishing feature allows individuals to use a template or a set of templates approved by the organization, as well as wizards and other tools, to create or modify Web content. The format management feature allows documents, including legacy electronic documents and scanned paper documents, to be formatted into HTML or Portable Document Format (PDF) for the Web site. The revision control feature allows content to be updated to a newer version or restored to a previous version; it also tracks any changes made to files by individuals. An additional feature is indexing, search, and retrieval: a CMS indexes all data within an organization, so individuals can search for data using keywords and the CMS retrieves the matching content.
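To show the revision-control feature described above in miniature, here is a toy Python sketch in which every edit is stored as a new version and any earlier version can be restored. It is a conceptual illustration only, not how any real CMS stores its content.

    class Page:
        """A single piece of content with a full version history."""

        def __init__(self, title, body):
            self.title = title
            self.versions = [body]        # version 1 lives at index 0

        def edit(self, new_body):
            """Store the edit as a new version instead of overwriting the old one."""
            self.versions.append(new_body)

        def current(self):
            return self.versions[-1]

        def restore(self, version_number):
            """Roll back by re-publishing an earlier version as the newest one."""
            self.versions.append(self.versions[version_number - 1])

    page = Page("About us", "We opened in 2010.")
    page.edit("We opened in 2010 and now have three offices.")
    page.restore(1)                        # back to the original wording
    print(page.current())                  # "We opened in 2010."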

What is the W3C?

The W3C is the official international standards organization for the World Wide Web; the initials stand for World Wide Web Consortium. Based at the Massachusetts Institute of Technology (MIT), the organization is responsible for developing and establishing commonly agreed standards for all applications to do with the Web.

The W3C is an international community that includes a full-time staff, industry experts, and several member organizations. These groups work together to develop standards for the World Wide Web.
The mission of the W3C is to lead the Web to its full potential by developing relevant protocols and guidelines. This is achieved primarily by creating and publishing Web standards. By adopting the Web standards created by the W3C, hardware manufacturers and software developers can ensure their equipment and programs work with the latest Web technologies. For example, most Web browsers incorporate several W3C standards, which allows them to interpret the latest versions of HTML and CSS code. When browsers conform to the W3C standards, it also helps Web pages appear consistent across different browsers.
Besides HTML and CSS standards, the W3C also provides standards for Web graphics (such as PNG images), as well as audio and video on the Web. The organization also develops standards for Web applications, Web scripting, and dynamic content. Additionally, the W3C provides privacy and security guidelines that websites should follow.
The World Wide Web Consortium has played a major role in the development of the Web since it was founded in 1994. As Web technologies continue to evolve, the W3C continues to publish new standards. For example, many of the technologies included in Web 2.0 websites are based on standards developed by the W3C. To learn more about the W3C and the current standards published by the organization, visit the W3C website.